Poly(I:C) preconditioning protects the heart against myocardial ischemia/reperfusion injury through a TLR3/PI3K/Akt-dependent pathway

Emerging evidence suggests that pretreatment with Toll-like receptor (TLR) ligands may play a vital role in the progression of myocardial ischemia/reperfusion (I/R) injury. Whether preconditioning with polyinosinic-polycytidylic acid (poly(I:C)), a synthetic double-stranded RNA and the specific ligand of TLR3, can elicit a cardioprotective phenotype remains unknown. Here, we report the protective effect of poly(I:C) pretreatment in acute myocardial I/R injury through activation of the TLR3/PI3K/Akt signaling pathway. Poly(I:C) pretreatment led to a significant reduction of infarct size, improvement of cardiac function, and downregulation of inflammatory cytokines and apoptotic molecules compared with controls. Subsequently, our data demonstrate that phosphorylation of the TLR3 tyrosine residue and its interaction with PI3K are enhanced, and that protein levels of phospho-PI3K and phospho-Akt are both increased after poly(I:C) pretreatment, while knockout of TLR3 suppresses the cardioprotection of poly(I:C) preconditioning through decreased activation of PI3K/Akt signaling. Moreover, inhibition of p85 PI3K by the administration of LY294002 in vivo and knockdown of Akt by siRNA in vitro significantly abolish the poly(I:C) preconditioning-induced cardioprotective effect. In conclusion, our results reveal that poly(I:C) preconditioning provides essential protection in myocardial I/R injury via its modulation of TLR3 and the downstream PI3K/Akt signaling, which may provide a potential pharmacologic target for perioperative cardioprotection.

INTRODUCTION

As the most lethal manifestation of coronary heart disease, acute myocardial infarction affects more than 7 million individuals worldwide annually, 1 accounting for a substantial footprint on the global health burden, especially in developing countries. In the initial phase of myocardial infarction, a partially or completely occluded coronary artery deprives the downstream myocardium of nutrients and oxygen, and restoration of blood flow aborts the deleterious effects of ischemia. However, reperfusion itself triggers a subsequent wave of insult, termed ischemia/reperfusion (I/R) injury, 2 which further threatens myocardial recovery and hence aggravates irreversible heart damage in survivors. In response to damage, injured or dying myocardial cells release damage-associated molecular patterns (DAMPs), which are detected and recognized by resident cardiac immune cells. More recently, it has become clear that the innate immune system responds rapidly to I/R injury-induced sterile inflammation and plays an essential role in modulating the pathological process, though much remains to be elucidated. 3 Thus, there is considerable interest in defining the mechanisms of the innate immune system in hopes of mitigating the deleterious effects of reperfusion while maintaining a healthy healing process. As the first-line defense of the innate immune system, Toll-like receptors (TLRs), a family of pattern recognition receptors, have been reported to recognize DAMPs released during I/R injury. Owing to their homeostatic effect of delimiting tissue injury, TLRs and their ligands appear to be promising targets with great potential for the treatment of reperfusion injury. In fact, TLRs participate in the tolerance to ischemic and inflammatory insults induced by preconditioning.
For instance, pretreatment with the TLR2 ligand Pam3CSK4 mimicked the cytoprotection of ischemic preconditioning in cardiac I/R injury. 4 Also, activation of TLR4 by lipopolysaccharide (LPS) preconditioning reduced infarct size and improved cardiac function in rats, mice, and rabbits. 5 However, the underlying mechanism of TLR ligand preconditioning is not entirely understood. For TLR2 and TLR4, some studies have demonstrated that the cytoprotective effect of their ligands' preconditioning may be related to activation of the phosphatidylinositol 3-kinase (PI3K)/Akt signaling pathway, which prevents pro-inflammatory and apoptotic events through cross-talk with the nuclear factor (NF)-κB signaling pathway. 6 Studies of TLR-mediated cardiac preconditioning have thus pointed to a cross-talk between TLRs and PI3K/Akt signaling. Among the many innate immune receptors, TLRs have been identified in a broad range of cell types. Of note, the messenger RNA expression levels of TLR2-4 are nearly tenfold higher than those of the other TLRs in human hearts and in cardiomyocytes from neonatal rats. 7,8 Unlike the other TLRs, which signal in a myeloid differentiation factor 88 (Myd88)-dependent manner, TLR3 engages the cytoplasmic adapter TIR domain-containing adapter inducing interferon (IFN)-β (TRIF) and finally induces translocation of NF-κB and activation of interferon-regulatory factor 3 (IRF3), resulting in the induction of inflammatory and apoptotic mediators. 9 Recently, polyinosinic-polycytidylic acid (poly(I:C)), the specific ligand of TLR3, has been demonstrated to reduce cerebral infarct size and increase the survival rate of mice following acute cerebral ischemic damage by activating the TLR3/TRIF pathway, limiting the systemic inflammatory response and caspase-3 activity. [10][11][12][13][14] However, the role of poly(I:C) preconditioning in myocardial I/R injury and the related mechanisms remained unexplored. In the present study, we investigated the effect of poly(I:C) preconditioning on myocardial I/R injury and observed that poly(I:C) preconditioning reduced infarct size and improved cardiac function. Importantly, we demonstrated for the first time a poly(I:C)-induced interaction between TLR3 and PI3K in vivo, which was enhanced with treatment time and I/R insult. Subsequently, we found that pharmacological inhibition of PI3K activity in vivo or knockdown of Akt through siRNA transfection in vitro abolished the cytoprotective effect of poly(I:C). Collectively, these findings suggest that poly(I:C) preconditioning enhances the activation of TLR3 and its interaction with the PI3K/Akt signaling pathway and thus confers cardiac tolerance against myocardial I/R injury.

RESULTS

Poly(I:C) preconditioning ameliorated myocardial I/R injury
Prompted by previous studies in which poly(I:C) preconditioning showed a neuroprotective effect against cerebral I/R injury, 11 we hypothesized that poly(I:C) preconditioning could confer protection on hearts subjected to myocardial I/R injury. Since infarct size corresponds to the risk of developing heart failure, it was essential to investigate whether poly(I:C) preconditioning would limit infarct development after I/R injury. Poly(I:C) preconditioning reduced myocardial infarct size to ~70% of that in the vehicle group subjected to I/R, while the two groups had a commensurate AAR (Fig. 1a, b). Next, we assessed the changes in cardiac function.
Echocardiography revealed that the left ventricular end-systolic internal dimension (LVIDs) was slightly decreased in the poly(I:C)-preconditioned group compared with the control group. In contrast, no difference in left ventricular end-diastolic internal dimension (LVIDd) was observed between these two groups. Again, poly(I:C) preconditioning exhibited a clear tendency toward recovery from cardiac contractile dysfunction, as indicated by increased ejection fraction (EF) and fractional shortening (FS) (Fig. 1c, d). Then, we performed hematoxylin and eosin (HE) staining to appraise the cardiac pathological changes. The poly(I:C)-pretreated hearts showed less severe myocardial damage, as evidenced by relieved edema, steatosis, and subendocardial hemorrhage following myocardial I/R injury (Fig. 1e, f). Furthermore, we analyzed cardiac biomarkers in serum, which have evolved into essential tools in cardiology and can help to predict adverse cardiovascular events according to recent publications. 15 In accordance with the above observations, troponin I, N-terminal pro B-type natriuretic peptide (NT-pro BNP), creatine kinase-MB (CK-MB), and lactate dehydrogenase (LDH) were all significantly decreased at 6 h after I/R injury (Fig. 1g). Thus, these data indicated that poly(I:C) treatment before exposure to I/R yielded an infarct-sparing effect.

Poly(I:C) preconditioning suppressed the inflammatory and apoptotic responses in vivo
As inflammation plays a predictable role in adverse cardiovascular events following I/R injury, 16 we next aimed to elucidate whether the inflammatory response was involved in the effect of poly(I:C) pretreatment. PCR revealed that the transcript levels of pro-inflammatory cytokines, such as IL-1β, TNF-α, and IL-6, were markedly reduced in poly(I:C)-pretreated mice compared with controls (Fig. 1h), indicating the inhibition of pro-inflammatory cytokine transcription by poly(I:C) preconditioning. Moreover, apoptosis acts as another vital component of cardiomyocyte death in multiple cardiovascular diseases. 17 Among apoptosis regulators, Bax is generally associated with pro-apoptotic activity, while Bcl-2 acts as an anti-apoptotic regulator. 18 Therefore, the Bcl-2/Bax ratio is used to indicate the anti-apoptotic level. We performed western blotting and found that the Bcl-2/Bax ratio was enhanced after poly(I:C) pretreatment, which correlated with the suppressed expression of the apoptosis-related molecule cleaved caspase-3 (Fig. 1i, j). To further corroborate this, a TUNEL assay was performed, and apoptotic death in cardiomyocytes was notably reduced by poly(I:C) preconditioning (Fig. 1k, l). Taken together, these results suggested that poly(I:C) preconditioning potently inhibited the inflammatory response and reperfusion-induced apoptosis.

Poly(I:C) pretreatment induced cardioprotection through activation of TLR3
As a synthetic double-stranded RNA, poly(I:C) is recognized by and specifically binds to TLR3, then activates downstream transcription factors via its intracellular signaling pathway. 19 To confirm that the cytoprotective effect of poly(I:C) preconditioning in our study was mediated via TLR3 activation, we assessed the expression levels of TLR3, TLR4, and their downstream adapters. In the presence of the I/R stimulus, poly(I:C) significantly upregulated the protein expression of TLR3 and TRIF, whereas TLR4 expression was not modified (Fig. 2a, b).
Consistently, the mRNA expression of TLR3, TRIF, IFN-α, and IFN-β, but not of TLR4 and Myd88, was increased in poly(I:C)-pretreated mice (Fig. 2c). In line with these alterations, the immunofluorescent TLR3-positive area was also larger in poly(I:C)-pretreated mice than in controls (Fig. 2d, e). Based on these data, poly(I:C) preconditioning apparently induced TLR3 activation. NF-κB is an innate immune signaling mediator that is ultimately activated in the TLR3 signaling cascade. 20 As previously reported, 14 I/R stimuli triggered the binding activity of p65 NF-κB, which was significantly decreased by poly(I:C) preconditioning (Supplementary Fig. 2a). Besides, the phosphorylation level of p65 NF-κB was also reduced by poly(I:C) preconditioning in ischemic myocardial tissue (Supplementary Fig. 2b, c). These results suggest that poly(I:C) preconditioning induced TLR3 activation, which in turn responded to I/R injury and modulated tissue damage following I/R insults in hearts. Since the effect of poly(I:C) preconditioning correlated with activation of TLR3, we further verified this phenomenon in TLR3 knockout mice. No difference was found in infarct size between tlr3−/− mice pretreated with vehicle and poly(I:C) (Fig. 2f, g), consistent with the absence of change in cardiac functional markers such as troponin I, NT-pro BNP, CK-MB, and LDH (Fig. 2h), indicating that poly(I:C) relieves I/R damage directly via TLR3.

Poly(I:C) preconditioning enhanced phosphorylation of PI3K/Akt
In particular, PI3K and its downstream kinase Akt are activated by TLR ligands through a preconditioning mechanism, as previously reviewed, 6 and act as a negative feedback to limit pro-inflammatory and apoptotic events in response to harmful stimuli. Poly(I:C) preconditioning upregulated the phosphorylation of PI3K and Akt, and accordingly, the protein expression level of p70 S6 kinase, a critical downstream target of the PI3K/Akt/mTOR pathway, was substantially increased (Fig. 3a, b). Moreover, several studies have identified a cross-talk between TLRs and the PI3K/Akt signaling pathway. 21,22 Therefore, we further explored the colocalization of TLR3 with phospho-PI3K and phospho-Akt. The colocalization of TLR3 with phospho-PI3K (Fig. 3c, d) or phospho-Akt (Fig. 3e, f) in ischemic heart lesions was clearly more pronounced in poly(I:C)-preconditioned mice subjected to myocardial I/R injury. We also detected the phosphorylation levels of PI3K and Akt in tlr3−/− mice, which were significantly decreased in poly(I:C)-pretreated tlr3−/− mice compared with the wild type (Fig. 3g, h). Moreover, no TLR3-positive cardiomyocytes were observed in tlr3−/− ischemic myocardial tissue (Supplementary Fig. 3). All these findings indicated that poly(I:C) preconditioning promoted activation of the PI3K/Akt signaling pathway via TLR3.

Poly(I:C) pretreatment inhibited inflammatory response and apoptotic cell death in vitro
The effect of poly(I:C) preconditioning-induced cytoprotection on cardiomyocytes was further studied in vitro by culturing adult mouse cardiomyocytes and the H9c2 cell line. In isolated adult mouse cardiomyocytes (Fig. 4a), poly(I:C) pretreatment significantly attenuated the oxygen-glucose deprivation (OGD)-induced increase in transcript levels of the pro-inflammatory cytokines CXCL-1, CXCL-2, IL-1β, and IL-6 (Fig. 4b). Also, poly(I:C)-preconditioned adult mouse cardiomyocytes showed a lower level of apoptosis, as indicated by an increased Bcl-2/Bax ratio and a decreased expression level of cleaved caspase-3 (Fig. 4c, d).
Besides, we performed the TUNEL assay in primary neonatal mouse cardiomyocytes (NMCMs) and found that the number of TUNEL-positive cells was also notably decreased (Fig. 4e, f). To further investigate whether poly(I:C) preconditioning protects cardiomyocytes, we used an additional cardiomyocyte line, H9c2, to verify this possibility. H9c2 cells were employed to test different periods of reoxygenation and different poly(I:C) concentrations by the cell counting kit-8 (CCK-8) assay (Supplementary Fig. 4). We chose 12.5 μg/ml poly(I:C) and 12 h of reoxygenation for the subsequent H9c2 studies. The results showed that pretreatment with poly(I:C) significantly reduced the transcript levels of pro-inflammatory factors compared with controls (Fig. 4g). Next, we also examined several apoptotic phenotype markers and found that poly(I:C) pretreatment suppressed apoptosis by downregulating Bax and cleaved caspase-3 while upregulating Bcl-2 (Fig. 4h, i). The TUNEL assay provided further supporting data, indicating that the incidence of apoptosis was markedly reduced by poly(I:C) preconditioning (Fig. 4j, k). Thus, these data suggested that poly(I:C) preconditioning might limit cell death by inhibiting OGD-induced inflammatory and apoptotic responses, consistent with the results in vivo.

Pharmacological inhibition of PI3K or Akt depletion abolished the protective effect of poly(I:C) preconditioning
As shown in Fig. 3, western blotting and immunofluorescent staining revealed that poly(I:C) preconditioning-induced cardioprotection was correlated with the PI3K/Akt pathway. To determine the role of the PI3K/Akt pathway in this cytoprotective effect, the specific PI3K inhibitor LY294002 was utilized. LY294002 was injected i.p. 15 min before myocardial ischemia. As shown in Fig. 5a, b, LY294002 completely abolished the reduction in myocardial infarct size induced by poly(I:C) preconditioning. Also, LY294002 reversed the improvement in cardiac function conferred by poly(I:C) preconditioning (Fig. 5c, d). Moreover, no statistically significant changes were observed in the transcript levels of pro-inflammatory cytokines (Supplementary Fig. 5a). In addition, LY294002 suppressed the activation of TLR3 and its downstream mediators, except IFN-α, while the expression of TLR4 and Myd88 did not show any significant change (Fig. 5e, Supplementary Fig. 5b). Furthermore, inhibition of PI3K led to a reduction in the phosphorylation levels of PI3K and Akt, which were promoted by poly(I:C) pretreatment, along with the protein level of p70 S6 kinase (Fig. 5f, g). These results implied that the PI3K/Akt signaling pathway might mediate the protective effect of poly(I:C) preconditioning, while its inhibitor, LY294002, could block this effect. According to the literature, Akt, the downstream kinase of PI3K, can transduce survival signals that protect the myocardium from I/R injury. 23 Because Akt1 is ubiquitously expressed and plays a critical role in cell survival, 24,25 we further generated Akt1-knockdown cardiomyocytes through transfection of Akt1 siRNA. In these cells, the expression of phospho-PI3K was significantly increased, while Akt levels were decreased and phospho-Akt was hardly detectable, whether the cells were treated with poly(I:C) or not (Fig. 5h, i). We also examined the ratio of dead cells and found that Akt depletion exacerbated cell death after OGD insult and could not be rescued by poly(I:C) pretreatment (Fig. 5j).
Thus, we conclude that Akt is required for the cytoprotective effect of poly(I:C) preconditioning against OGD damage.

Poly(I:C) activated TLR3 and induced its enhanced interaction with PI3K
Considering the close link between TLR3 and the PI3K/Akt signaling pathway suggested by the results above, a separate set of immunoprecipitation experiments was designed to determine whether TLR3 interacts with PI3K in vitro and in vivo. TLR3 overexpression resulted in a significant increase of tyrosine residue phosphorylation after poly(I:C) stimulation in HEK 293 cells compared with corresponding controls (Fig. 6a). Next, we found that the TLR3-PI3K interaction exhibited a poly(I:C)-dependent fashion and was similar to the kinetics of TLR3 tyrosine residue phosphorylation (Fig. 6b). Besides, we observed that poly(I:C) treatment stimulated TLR3 tyrosine residue phosphorylation, resulting in increased association with PI3K, when mice were injected with poly(I:C) for 0, 15, 30, 60, 120, and 240 min (Fig. 6c). More importantly, we detected the interaction between TLR3 and PI3K in mice subjected to I/R injury, and poly(I:C) preconditioning markedly increased the TLR3-PI3K association compared with controls (Fig. 6d). To better localize the TLR3-PI3K interactions, we utilized the proximity ligation assay (PLA), which directly identifies protein interactions by fluorescence. Here, we demonstrated a positive PLA reaction between TLR3 and PI3K, and the number of reactions was much higher in the poly(I:C) preconditioning group (Fig. 6e, f), consistent with the immunoprecipitation results. In summary, our data revealed that TLR3 was activated by triggering the phosphorylation of its tyrosine residue, followed by increased interaction with PI3K after poly(I:C) pretreatment.

Fig. 1 Poly(I:C) pretreatment protected mouse hearts from I/R-induced injury via limitation of inflammation and apoptosis. a Representative photographs of TTC-stained, Evans blue-perfused heart sections obtained from poly(I:C)- or vehicle-pretreated mice subjected to I/R injury (45 min ischemia/24 h reperfusion). Red, AAR; blue, healthy myocardial tissue; white, infarcted tissue; scale bars, 1 mm. b Quantitative data of left ventricular infarct size and AAR in poly(I:C)- or vehicle-pretreated mice (n = 9) (experimental groups were compared via unpaired Student's t test, bars indicate the SEM, *P < 0.05; **P < 0.01). c Representative transthoracic echocardiography 24 h before and after I/R injury. d Average data for LVIDd, LVIDs, EF, and FS measured by echocardiography in poly(I:C)- or vehicle-pretreated mice subjected to I/R (n = 15) (all experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). e Representative photographs of HE-stained heart sections in all groups. Scale bars (first panel of every group): 50 μm; scale bars (second panel of every group): 20 μm. f Morphological evaluation of myocardial injury after I/R in each group (n = 6) (all experimental groups were compared via unpaired Student's t test, bars indicate the SEM, *P < 0.05; **P < 0.01). g Cardiac functional markers in serum were analyzed with ELISA kits (n = 6). h Expression of the inflammatory cytokines IL-1β, TNF-α, and IL-6 as assessed by qRT-PCR in hearts subjected to I/R with poly(I:C) or vehicle pretreatment (n = 6).
i, j Representative western blot (i) and average data (j) for Bax, Bcl-2, and cleaved caspase-3 in mice subjected to sham or I/R with poly(I:C) or vehicle pretreatment (n = 6). k, l Representative microphotographs (k) and average data (l) for the TUNEL assay of heart sections in mice subjected to I/R (n = 6); scale bars, 200 μm. (All experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01.) Veh, vehicle; PIC, poly(I:C); LV, left ventricle; I/R, ischemia/reperfusion

DISCUSSION

Our findings showed that the poly(I:C) pretreatment group had significantly reduced myocardial infarction and better-preserved cardiac function after myocardial I/R injury compared with the control group. We also found that poly(I:C) preconditioning significantly decreased the expression levels of pro-inflammatory cytokines and apoptosis-related molecules after myocardial I/R injury. However, the cardioprotection of poly(I:C) was lost in TLR3 knockout mice, suggesting that TLR3 is essential for poly(I:C)-induced protective effects. Unlike its critical role in viral myocarditis, and distinctly different from the mechanisms of cardioprotection observed in TLR3 knockout mice, poly(I:C) preconditioning exerts myocardial protection via phosphorylation of TLR3 and its enhanced interaction with PI3K and downstream molecules. Moreover, inhibition of p85 PI3K by the administration of LY294002 in vivo and knockdown of Akt by siRNA in vitro significantly abolished poly(I:C)-induced cardioprotection. Therefore, we conclude that poly(I:C) preconditioning protects the myocardium against I/R injury in a TLR3/PI3K/Akt-dependent way (Fig. 6g). Recently, increasing evidence indicates that the TLR family and their ligands could be high-potential therapeutic targets in cardiac I/R injury. Among all the TLR ligands, poly(I:C) has exhibited neuroprotective effects on cerebral I/R injury, and its protection is mainly mediated by a TLR3-dependent mechanism, 10,12,14 specifically, by inhibition of the Fas/FADD interaction, activation of the TRIF pathway, and downregulation of TLR4 signaling. 11,13 However, to the best of our knowledge, the role of poly(I:C) preconditioning in myocardial I/R injury and the related mechanisms have not been reported previously. With an increasingly aging population worldwide, more aged patients may be at high risk of ischemic cardiac damage due to coronary disease and surgical procedures. Our study of the cardioprotection conferred by poly(I:C) as an antecedent treatment may help provide potential benefits to patients who undergo major surgery. Indeed, as a synthetic dsRNA, poly(I:C) has been applied as a clinical therapy for humoral immune-related diseases and developed into a promising cancer vaccine adjuvant, 26 which gives the protective effect of poly(I:C) a promising translational prospect. In the current study, activation of TLR3 and its downstream signaling through poly(I:C) preconditioning was shown to confer cardioprotection similar to that observed in TLR3-deficient mice. 27 This phenomenon has also been reported for other TLRs. For instance, tlr4−/− mice exhibited decreased infarct size compared with wild-type mice after myocardial I/R injury, 28 while other studies found that preconditioning with a small dose of a TLR4 ligand, LPS or LTA, yields comparable cardioprotection. 29,30 Likewise, this also applies to TLR2.
31,32 Also, dating back to 1986, serial occlusions of the left descending coronary artery, namely ischemic preconditioning, were found to exert an enormously powerful anti-infarct effect, reducing infarct size by ~75%. 33 The underlying principle of both ischemic and TLR ligand preconditioning may be an adaptive response to a sublethal ischemic/innate immune stimulus that confers a state of tolerance, rendering the myocardium resistant to a subsequent, more severe ischemic insult. Of note, different cardioprotective mechanisms have been observed in TLR-deficient mice and with TLR ligand preconditioning. For example, the cardioprotective mechanism in tlr4−/− mice is mainly related to blunting of I/R-induced NF-κB binding activity, which can significantly improve the recovery of cardiac function and downregulate inflammatory cytokine gene expression, suggesting that the TLR4-mediated NF-κB signaling pathway contributes to I/R injury. 34 However, the cardioprotection induced by TLR4 ligand preconditioning was mediated through activation of the PI3K/Akt signaling pathway. 35 Besides, mice with TLR2 deficiency exhibited smaller infarct size and better-preserved cardiac function through limited leukocyte influx, cytokine production, and pro-apoptotic signaling. 36 Nevertheless, the study of cardioprotection by TLR2 ligand preconditioning found a mechanism different from that of the TLR2 knockout research. 24 In our study, unlike the mechanism found in TLR3 deficiency-induced cardioprotection, which acts mainly through inhibition of TRIF downstream signaling, the cardioprotective effects of poly(I:C) preconditioning were observed mainly through activation of TLR3 and its enhanced interaction with PI3K and downstream signaling. As an essential pro-survival pathway, the PI3K/Akt signaling cascade plays a key role in the development of cardioprotection against myocardial infarction and I/R injury. Importantly, we found increased TLR3 tyrosine residue phosphorylation and interaction with PI3K after poly(I:C) stimulation in both HEK293 cells and hearts subjected to OGD or I/R injury, and the PLA data also clearly showed the enhanced interaction between TLR3 and PI3K induced by poly(I:C) preconditioning in I/R-injured myocardium. Besides, overexpression of Akt reduces infarct size in the rat heart, 37 and transfection of Akt siRNA exacerbated OGD-induced NMCM death and eliminated the protection of poly(I:C), pointing to Akt as an essential pro-survival protein in cardioprotection. Activation of the PI3K/Akt signaling pathway can limit the pro-inflammatory response and apoptosis, 38 which was also verified by the reduction in the transcript levels of pro-inflammatory cytokines and in the protein expression levels of apoptosis-related molecules in our study. Furthermore, Akt phosphorylates downstream proteins, such as mTOR/p70 S6 kinase (S6K), which have been linked to the protective effect of ischemic preconditioning. 39 Kim et al. 40 found that overexpression of S6K markedly reduced the activity of NF-κB upon TLR ligand stimulation. Similarly, in our study, we found a higher level of p70 S6K and decreased binding activity of p65 NF-κB in the poly(I:C) preconditioning I/R group compared with the control group.
Therefore, we speculated that pretreatment with poly(I:C) increased phosphorylation of TLR3, with subsequent recruitment of the p85 subunit of PI3K and activation of the phospho-Akt-dependent signaling pathway, which led to increased p70 S6K production, reduced inflammatory and apoptotic responses, and inhibited p65 NF-κB activity, inducing a cardioprotective phenotype. Interestingly, for the mRNA and protein expression levels of TLR4 and its downstream adapter Myd88, we did not find any difference between the poly(I:C)- and vehicle-pretreated groups subjected to I/R injury. However, this is not consistent with what has been found in cerebral I/R injury, in which poly(I:C) pretreatment has been reported to exhibit neuroprotection by downregulating TLR4 signaling via TLR3. The organ-specific mechanisms underlying the protection induced by poly(I:C) preconditioning remain elusive. In our study, we found that the mRNA expression levels of TRIF and IFN-β were both increased in the poly(I:C) I/R group compared with the vehicle I/R group, indicating activation of TLR3/TRIF/IFN signaling. Also, LY294002 inhibited the transcript levels of TRIF, IFN-α, and IFN-β after myocardial I/R injury. Therefore, our data indicated that the poly(I:C) preconditioning-induced cardioprotective effect was primarily due to activation of the TLR3/TRIF pathway and its cross-talk with the PI3K/Akt pathway.

Fig. 2 TLR3 activation was responsible for the protective effect of poly(I:C) preconditioning against I/R injury. a, b Representative western blot (a) and average data (b) for TLR3, TRIF, and TLR4 in hearts subjected to I/R (n = 6). c Relative expression of TLR3, TRIF, IFN-α, IFN-β, TLR4, and Myd88, analyzed by qRT-PCR in hearts subjected to I/R with poly(I:C) or vehicle pretreatment (n = 6) (all experimental groups were compared via one-way ANOVA, bars indicate the SEM, *P < 0.05; **P < 0.01). d, e Representative microphotographs (d) and average data (e) for TLR3 (as assessed by TLR3, cTNT, and DAPI fluorescence intensity) of heart sections in mice subjected to 45 min of myocardial ischemia followed by 24 h of reperfusion (n = 3) (all experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). Blue, DAPI; green, cTNT; red, TLR3; scale bars, 50 μm. f, g Infarct size was identified with TTC-Evans blue double staining in tlr3−/− mice, scale bars, 1 mm (n = 3) (two experimental groups were compared via unpaired Student's t test, bars indicate the SEM, *P < 0.05; **P < 0.01). h Cardiac functional markers were analyzed with ELISA kits (n = 6) (all experimental groups were compared to the tlr3−/− + I/R group via one-way ANOVA with Dunnett's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). TLR3 KO, TLR3 knockout mice; ns, not statistically significant

Admittedly, our findings must be considered within the restrictions of several potential limitations. First, ideal cardioprotective drugs must have both short- and long-term effects. However, we only examined infarct volume and left ventricular function 24 h after I/R injury, representing only the acute post-infarct period. A full study of final infarct size and remodeling (which happens on a much longer time scale) will require further research to determine how long the window of protection induced by poly(I:C) preconditioning lasts.
Second, to promote the translation of drugs from bench to clinical trials, at least two animal models are needed to test a drug's effect. Our group is now working on a rat myocardial I/R model with poly(I:C) preconditioning to complement the validation of poly(I:C) preconditioning and of the signaling pathways mediating tolerance to I/R injury. Third, we did not compare the protective effects of poly(I:C) with those of other TLR ligand preconditioning regimens in the mouse model of myocardial I/R injury. Further study is needed to identify the common key mediator of TLR ligand preconditioning.

CONCLUSIONS

In summary, our data indicate that TLR3 ligand preconditioning induces cardiac protection, mediated through activation of the PI3K/Akt signaling pathway, which reduces excessive inflammatory and apoptotic responses. These findings further support the hypothesis that TLR3 is essential for mediating tolerance against myocardial I/R injury. They also support the hypothesis that poly(I:C) exposure reprograms the TLR3 signaling pathway so that its response to I/R injury produces protection. Understanding the association between poly(I:C) preconditioning and downstream signaling molecules would highlight the role of TLR3 during myocardial I/R injury and provide a fundamental scientific basis for therapeutic approaches. Our data will hopefully support the application of poly(I:C) as a potential preventive and therapeutic target for perioperative cardioprotection.

Experimental animals
The tlr3−/− mice were purchased from the Jackson Laboratory and backcrossed to C57BL/6 mice for multiple generations. Male C57BL/6 mice were used.

Experimental design
We employed a preconditioning regimen to investigate the effect of poly(I:C) on myocardial I/R (Supplementary Fig. 1a). C57BL/6 mice were divided into four groups: the vehicle sham group (n = 20), the poly(I:C) sham group (n = 20), the vehicle I/R group (n = 30), and the poly(I:C) I/R group (n = 30). Poly(I:C) (Enzo, 0205160) was dissolved in 0.9% sterile saline to reach a concentration of 1 μg μl−1. We found that the 12.5 mg kg−1 dose of poly(I:C) produced the best effect among the doses tested (5, 12.5, and 25 mg kg−1; data not shown). Poly(I:C) was injected i.p. 12 h before ischemia (45 min) followed by 24 h of reperfusion, without washout. The PI3K inhibitor LY294002 (Calbiochem, 440202) was applied to investigate the underlying mechanism related to the TLR3/PI3K signaling pathway. Subjects were separated into the poly(I:C) sham group, the poly(I:C) + LY294002 sham group, the poly(I:C) I/R group, and the poly(I:C) + LY294002 I/R group (n = 20/group). The LY294002 groups received an i.p. injection of 0.03 mg g−1 15 min before ischemia (45 min), without washout. Researchers were blinded to pretreatment assignment during all experiments and data analysis.

Cell culture and OGD
H9c2 (2-1) cells were cultured in Dulbecco's modified Eagle's medium (DMEM; Gibco, 12430104) containing 4 mM L-glutamine, 4.5 g l−1 glucose, penicillin/streptomycin (100 U/ml), and 10% fetal bovine serum (Thermo Fisher, A2720803), and incubated at 37°C in a humidified chamber with 5% CO2 and 95% O2. The cells were not allowed to grow to 100% confluency, to avoid the loss of differentiation potential. To establish the myocardial I/R injury model in vitro, OGD was performed as previously described.
29,41 Cells were cultured in glucose-free and serum-free DMEM (Gibco, 11966025) in an anaerobic environment with 1% O2, 5% CO2, and 94% N2 at 37°C for 4 h. The glucose-free DMEM was then replaced with standard culture medium, and the cells were returned to normoxic conditions for reoxygenation for 3-12 h (Supplementary Fig. 1b). H9c2 cells were used only for the experiments determining the OGD period and the poly(I:C) pretreatment concentration. Twelve hours before OGD, the cells were pretreated with different concentrations of poly(I:C) (0.1-20 μg ml−1) or vehicle, without washout. Cell viability was determined by Cell Counting Kit-8 (MedChemExpress, HY-K0301) following the manufacturer's instructions. HEK293 cells were incubated at 37°C in DMEM containing 10% fetal bovine serum in a 5% CO2 incubator. Plasmid pECMV-TLR3-m-FLAG (Hanbio, generated) was diluted in serum-free DMEM, mixed with polyethylenimine (PEI; Polysciences, 23966-2), and administered to the cells for 48 h. After that, the medium was changed to complete culture medium with poly(I:C) (100 μg ml−1); 1 h after poly(I:C) pretreatment, the cells were challenged with OGD as described above. 42 Adult mouse cardiomyocytes were isolated as previously described. 43 After anesthesia, the heart was excised and immediately injected with EDTA buffer into the right ventricle while the descending aorta was clamped. The heart was then transferred to fresh EDTA buffer in a 60-mm dish and digested with prepared perfusion and digestion buffers injected into the left ventricle. The heart tissue was separated, pulled into 1-mm pieces, and gently triturated to dissociate cells. Digestion was stopped by the addition of stop buffer, and the cell suspension was filtered. The calcium level was gradually restored by four rounds of gravity settling using calcium reintroduction buffers. The cell pellet was then resuspended in pre-warmed culture medium and plated onto precoated culture plastic in a humidified incubator with 5% CO2 at 37°C. For adult mouse cardiomyocytes, the OGD period was 1 h, followed by 2 h of reoxygenation.

Fig. 3 Representative western blot (a) and average data (b) for PI3K, phospho-PI3K, Akt, phospho-Akt, and p70 S6 kinase in ischemic myocardium with or without poly(I:C) preconditioning (n = 6). c, d Immunofluorescent colocalization (c) and quantification (d) of TLR3 and phospho-PI3K in mice subjected to I/R with or without poly(I:C) preconditioning, scale bars, 50 μm (n = 6). e, f Immunofluorescent colocalization (e) and quantification (f) of TLR3 and phospho-Akt in mice subjected to I/R with or without poly(I:C) preconditioning, scale bars, 50 μm (n = 6) (all experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). g, h Representative western blot (g) and average data (h) for PI3K, phospho-PI3K, Akt, and phospho-Akt in tlr3−/− mice subjected to I/R with or without poly(I:C) preconditioning (n = 6) (all experimental groups were compared with the wild-type poly(I:C) pretreatment group via one-way ANOVA with Dunnett's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). p-PI3K, phosphorylated PI3K; p-Akt, phosphorylated Akt; p70 S6K, p70 S6 kinase

Neonatal mouse cardiomyocytes (NMCMs) were isolated as described. 44 Hearts from 1-3-day-old C57BL/6 mice were harvested and sheared into small pieces.
Heart tissues were incubated in 5-ml Eppendorf tubes containing 2 ml digestion buffer (0.3 mg ml−1 collagenase II and 0.45 mg ml−1 pancreatin) for 30 min at 37°C. Heart tissues were roughly triturated, and the isolated cardiomyocytes were collected in a tube containing medium to inhibit enzyme activity. The procedure was repeated until the hearts were completely digested. The isolated cells in medium were then filtered through a 100-μm cell strainer and centrifuged for 1.5 min at 1200 rpm, and the pellet was resuspended in 2 ml medium. Isolated cells were cultured in medium containing 4 mM L-glutamine, 4.5 g l−1 glucose, penicillin/streptomycin (100 U ml−1), and 10% fetal bovine serum and incubated at 37°C in a humidified chamber with 5% CO2.

Myocardial I/R
The myocardial I/R procedure was performed by occlusion of the left anterior descending coronary artery (LAD) as previously employed. 24 Briefly, mice were anesthetized with ketamine [120 mg kg−1, intraperitoneally (i.p.)] and xylazine (4 mg kg−1, i.p.). The depth of anesthesia was evaluated by corneal and withdrawal reflexes. Subjects were intubated and ventilated with 80% oxygen mixed with 20% carbon dioxide using a rodent ventilator at a rate of 100-120 breaths/min and a tidal volume between 150 and 200 μl. Then, a skin incision and a left anterior thoracotomy through the third and fourth intercostal spaces were performed. The LAD was carefully exposed and occluded with a 6-0 polypropylene suture and a small tube 2-3 mm from the tip of the left auricle. The tube was gently removed, and the suture was untied (onset of reperfusion) after the 45-min period of ischemia. The skin was closed, and the intratracheal tube was simultaneously removed. Sham mice underwent a thoracotomy, but the LAD was not occluded. Electrocardiography was used to monitor heart rate, confirm that the LAD was ligated successfully, and reveal any reperfusion-related wave changes during reperfusion (Supplementary Fig. 1c). Rectal temperature was monitored to maintain the body temperature between 36.5 and 37.5°C during the procedure.

ELISA
Blood from I/R mice was obtained with a heparin-coated syringe and centrifuged at 2000 rpm for 15 min. Supernatants were collected to quantify the cardiac functional markers troponin I (Tn-I) and N-terminal pro-brain natriuretic peptide (NT-proBNP) with a mouse Tn-I ELISA kit (Bioswamp, MU30421) and a mouse NT-proBNP ELISA kit (Bioswamp, MU30252) according to the protocols.

Estimation of myocardial infarct size
After 24 h of reperfusion, animals were anesthetized as previously described and sacrificed with the LAD re-occluded. Blood was collected with a 1-ml syringe, and Evans blue (Sigma, E2129) was then injected quickly into the left ventricular cavity. While the heart was still beating, the dye circulated and distributed evenly throughout the heart. The left ventricle was then harvested, rinsed with ice-cold phosphate-buffered saline (PBS), frozen at −20°C for 30 min, cut into 2-mm-thick transverse slices, then incubated at 37°C for 20 min in 2% TTC (2,3,5-triphenyltetrazolium chloride; Sigma, 17779) and for 24 h in 4% paraformaldehyde (Biosharp, BL539A). The heart slices were photographed. Digital images were quantified, and the risk and infarct sizes on both faces of each slice were calculated in cubic centimeters. The left ventricular size, area at risk (AAR), and myocardial infarct size (MI) of each slice were summed to evaluate the whole-heart values. Infarct size was calculated as the percentage MI/AAR for each heart.
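As a minimal sketch of this planimetric bookkeeping (all measurement values below are hypothetical placeholders, not data from this study), the per-slice summation and the MI/AAR percentage can be expressed as:

```python
# Sketch of whole-heart infarct quantification (hypothetical values).
# Each tuple holds planimetric measurements for one 2-mm slice, averaged
# over both faces: (left ventricle, area at risk, infarcted tissue).
slices = [
    (0.42, 0.18, 0.05),
    (0.45, 0.21, 0.08),
    (0.40, 0.16, 0.06),
]

lv_total  = sum(lv for lv, aar, mi in slices)   # whole-heart LV size
aar_total = sum(aar for lv, aar, mi in slices)  # whole-heart area at risk
mi_total  = sum(mi for lv, aar, mi in slices)   # whole-heart infarct size

infarct_pct = 100.0 * mi_total / aar_total      # infarct size as % of AAR
aar_pct     = 100.0 * aar_total / lv_total      # AAR as % of LV

print(f"MI/AAR = {infarct_pct:.1f}%, AAR/LV = {aar_pct:.1f}%")
```

Reporting AAR/LV alongside MI/AAR is what permits the check, mentioned in the Results, that the two groups had a commensurate area at risk.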
Echocardiographic assessment of left ventricular structure and function
Animals were anesthetized as previously described before the examination. Echocardiography (Vivid7 Dimension, GE) was used to evaluate left ventricular geometry and function with two-dimensionally guided M-mode. Various parameters were measured, including heart rate, LVIDd, and LVIDs.

TUNEL staining
TUNEL immunofluorescence staining was applied to evaluate and analyze the level of apoptosis in the I/R cardiomyocytes. The animals were anesthetized and perfused with cold PBS and 4% paraformaldehyde (n = 6/group). The hearts were then sectioned into 20-μm-thick frozen slices. Staining was performed using a DNA Fragmentation Detection Kit (Millipore, QIA39-1EA) according to the protocols. The paraffin-embedded heart sections were first deparaffinized, permeabilized with 20 μg/ml proteinase K in 10 mM Tris-HCl for 15 min, and blocked with 10% normal goat serum in PBS at room temperature for 1 h. The stained sections were photographed with a FluoView FV10i confocal microscope. Nuclei were stained blue with DAPI, and apoptotic (TUNEL-positive) cells were green. We counted the numbers of DAPI- and TUNEL-positive cells. Each section was first separated into four quadrants; the number of positive cells in each quadrant was counted and then averaged. The percentage of TUNEL-positive cells was calculated as (green/blue) × 100%.

Histology
After collecting blood and rinsing with cold PBS, the hearts (n = 6/group) were sliced into five or six equal sections (5 mm). The specimens were fixed in 4% paraformaldehyde, embedded in paraffin, stained with hematoxylin and eosin (Baso, BA-4025) according to standard protocols, and examined by light microscopy for histological changes. A scoring system was used to evaluate the histological myocardial damage based on the modified scoring system described by Hu et al. 45 Briefly, myocardial lesions comprise interstitial edema, myofiber degeneration (including myofiber swelling and myofibrillar lysis), and subendocardial hemorrhage, which were all graded according to their severity (0 = no lesion, 1 = mild, 2 = moderate, 3 = marked) and distribution (0 = no damage, 1 = focal damage, 2 = multifocal damage, 3 = diffuse damage). A mean score for each variable was determined for each heart, and a group mean score was calculated.

Immunofluorescence staining
Hearts were harvested, fixed, and sectioned into 5-μm-thick frozen slices (n = 3/group). The slides were then blocked with bovine serum albumin V (Gentihold, 10735094001), washed again, and incubated with the anti-TLR3 antibody at a 1:20 dilution in 1% BSA for 2 h at room temperature.

Fig. 4 Inflammatory response and apoptosis were inhibited in adult mouse cardiomyocytes and H9c2 cell lines by poly(I:C). a Isolated primary cardiomyocytes from adult mice. Blue, DAPI; red, phalloidin-stained microfilament skeleton of cardiomyocytes; scale bars, 10 μm. b Expression of the inflammatory cytokines CXCL1, CXCL2, IL-1β, and IL-6 as assessed by qRT-PCR in adult mouse cardiomyocytes subjected to OGD with poly(I:C) (n = 3). c, d Representative western blot (c) and average data (d) for Bax, Bcl-2, and cleaved caspase-3 in adult mouse cardiomyocytes subjected to OGD with poly(I:C) (n = 3). e, f Representative microphotographs (e) and average data (f) for TUNEL staining of primary neonatal mouse cardiomyocytes subjected to OGD with 12 h of poly(I:C) preconditioning (n = 6).
Blue, DAPI; green, TUNEL-positive cardiomyocytes; scale bars, 50 μm. g Expression of the inflammatory cytokines CXCL1, CXCL2, IL-1β, and IL-6 in H9c2 cell lines subjected to OGD with poly(I:C) (n = 3). h, i Representative western blot (h) and average data (i) for Bax, Bcl-2, and cleaved caspase-3 in H9c2 cells subjected to OGD with poly(I:C) (n = 3). j, k Representative microphotographs (j) and average data (k) for the TUNEL assay of H9c2 cell lines subjected to OGD with 12 h of poly(I:C) preconditioning (n = 6) (all experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). nOGD, non-OGD

Next, the slides were washed and incubated in the dark for 45 min at room temperature with fluorescein-conjugated secondary antibodies (Invitrogen, A-11008) at a 1:500 dilution. Nuclear staining was done with DAPI (Beyotime, C1005), and myocardial fibers were stained with cardiac troponin T (Proteintech, 11513-1-AP).

Quantitative real-time PCR
Ischemic myocardial tissue was harvested quickly 24 h after reperfusion (n = 6/group). Total RNA was extracted from cardiac tissue or cells using Trizol (Invitrogen, 15596026) according to the instructions. cDNA was reverse-transcribed from 2 μg of total RNA with 200 U of M-MuLV reverse transcriptase. Real-time PCR was carried out on a Bio-Rad iCycler in 96-well plates and performed with diluted cDNA using primers for tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), interleukin-6 (IL-6), TLR3, the adapter protein TIR domain-containing adapter-inducing interferon (TRIF), interferon-β (IFN-β), interferon-α (IFN-α), Toll-like receptor 4 (TLR4), and Myd88 (sense and antisense sequences in Supplementary Table 1). The products were resolved on 1% ethidium bromide-stained agarose gels. Threshold cycle values (CT) were analyzed using the ΔΔCT method to quantify gene expression.

Transfection of Akt small interfering RNA (siRNA)
The Akt siRNA and corresponding controls (GenePharma, generated) were mixed with TransMessenger Transfection Reagent (Qiagen, 301525) and administered to NMCMs according to the protocol. After 4 h, the culture medium was replaced by normal culture medium without antibiotics for 24 h. The cells were then treated with poly(I:C) or vehicle for 12 h, after which they were challenged with OGD (4 h of oxygen-glucose deprivation and 3 h of reoxygenation).

Immunoprecipitation
The tissue samples and cells were washed twice with ice-cold PBS and lysed with cell lysis buffer (10 mM HEPES pH 7.9, 10 mM KCl, 1.5 mM MgCl2, 50 mM NaF, 1 mM Na3VO4, 1 μM PMSF plus a protease inhibitor cocktail). Protein samples (800 μg) were incubated at 4°C for 1 h with 2 μg of antibody to TLR3 (Abcam, ab62566), followed by the addition of 15 μl of protein G magnetic beads (Bio-Rad, 161-4023). The precipitates were then washed four times with lysis buffer, separated on 10% SDS gels, transferred to polyvinylidene difluoride membranes, and subjected to immunoblotting (IB) with the antibody to the p85 subunit of PI3K (Cell Signaling Technology, 4257).

Proximity ligation assay
PLA is an antibody-based technique to determine whether two proteins are within 40 nm of each other. Proteins detected in this manner are identifiable by fluorescence. Heart slides were fixed and incubated using the Duolink™ In Situ mouse/rabbit red starter kit (Sigma, DUO92101) according to the protocols. Heart slides were incubated with a blocking solution in a humidity chamber at 37°C for 30 min.
After removing the blocking solution, the primary antibodies anti-TLR3 and anti-PI3K were added to cover each section and incubated at 4°C overnight. The slides were then washed twice with 5% bovine serum albumin and incubated with the PLUS and MINUS antibodies at room temperature for 20 min. The secondary antibody mix was added and incubated at 37°C for 1 h. The ligation mix was added and incubated at 37°C for 30 min, and the slides were then washed with 1× buffer A twice for 2 min. The amplification mix was added and incubated at 37°C for 100 min. Slides were washed with 1× buffer A and B twice for 10 min and with 0.01× buffer B for 1 min. Slides were then mounted in mounting medium with DAPI. Finally, slides were photographed with a DAPI filter and a 43 HE filter to identify the cells and PLA reactions using a fluorescence microscope, and images were analyzed with ImageJ software.

NF-κB activation
NF-κB activation was quantified employing a p65 Transcription Factor Assay Kit (Abcam, ab133112). A nuclear extraction kit (Abcam, ab113474) was first used to extract the nuclear protein of H9c2 cells. A 96-well plate was coated with a DNA binding sequence specific for the active form of NF-κB. Ten microliters of nuclear extract was loaded into each designated well. After incubation and washing, the plate was incubated with an antibody to p65 NF-κB. After another cycle of incubation and washing, horseradish peroxidase-conjugated secondary antibody was added to the wells, which were then developed for luminescence.

Statistical analysis
Investigators were blinded to medication during analyses. Data are shown as mean ± SEM. Presented data are representative of at least six separate experiments. Statistical analysis was performed with GraphPad Prism 8.0. Comparisons of group mean values (two groups) were carried out by unpaired Student's t test. Multiple-group comparisons were performed using one-way ANOVA with Tukey's, Dunnett's, or Bonferroni's multiple comparisons test (details in figure legends). P values ≤ 0.05 were considered statistically significant.
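The ΔΔCT quantification used for the qRT-PCR data above can be reproduced in a few lines; the sketch below assumes a housekeeping reference gene and uses invented CT values purely for illustration:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the 2^-ΔΔCT method."""
    dct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                    # normalize to control group
    return 2.0 ** (-ddct)

# Hypothetical CT values: IL-6 in a poly(I:C) I/R sample vs a vehicle I/R
# control, each normalized to a housekeeping reference gene (placeholder data).
print(fold_change(26.4, 18.1, 24.9, 18.0))  # < 1 means reduced IL-6 expression
```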
Fig. 5 Inhibition of PI3K or transfection of Akt siRNA abolished poly(I:C)-induced cardioprotection following myocardial I/R injury. a Representative photographs of TTC-stained, perfused heart sections obtained from poly(I:C) with or without LY294002 pretreated mice subjected to I/R injury; scale bars, 1 mm. b Quantitative data of left ventricular infarct size (MI) and area at risk (AAR) in poly(I:C) with or without LY294002 pretreated mice (n = 6). c Representative transthoracic echocardiography 24 h before and after I/R injury with or without LY294002 pretreatment. d Average data for LVIDd, LVIDs, EF, and FS measured by echocardiography in poly(I:C)- or vehicle-pretreated mice subjected to I/R (n = 6) (all experimental groups were compared via one-way ANOVA with Bonferroni's multiple comparisons test, bars indicate the SEM, *P < 0.05). e Expression of TLR3 and TLR4 as assessed by qRT-PCR in hearts subjected to I/R after poly(I:C) preconditioning with or without LY294002 pretreatment (n = 6). f, g Representative western blot (f) and average data (g) for PI3K, phospho-PI3K, Akt, phospho-Akt, and p70 S6 kinase in hearts subjected to I/R with poly(I:C) and/or LY294002 pretreatment (n = 6) (all experimental groups were compared with the poly(I:C) I/R group via one-way ANOVA with Dunnett's multiple comparisons test, *P < 0.05; **P < 0.01). h, i Representative western blot (h) and average data (i) for the protein levels of phospho-PI3K, PI3K, phospho-Akt, and Akt in neonatal cardiomyocytes transfected with Akt siRNA and treated with poly(I:C) before OGD (n = 6). j Cell viability detected in neonatal cardiomyocytes transfected with Akt siRNA for 24 h and subjected to OGD (n = 6) (all experimental groups were compared via one-way ANOVA with Tukey's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). LY, LY294002

Fig. 6 Poly(I:C) increased phosphorylation of the TLR3 tyrosine residue and the subsequent recruitment of PI3K. a HEK 293 cells expressing Myc-tagged TLR3 were treated with poly(I:C). Cell lysates were immunoprecipitated with anti-phosphotyrosine (PY20) and western blotted with anti-TLR3. b Cell lysates were immunoprecipitated with anti-Myc and western blotted with anti-PI3K p85 subunit. The same blot was reprobed with the TLR3 antibody as an immunoprecipitation control. c Mice were injected with poly(I:C), and heart tissues were harvested at 0, 15, 30, 120, and 240 min. Samples were immunoprecipitated with anti-phosphotyrosine (PY20) and western blotted with anti-TLR3 (upper). Furthermore, samples were immunoprecipitated with anti-TLR3 and western blotted with anti-PI3K. The same blot was reprobed with the TLR3 antibody as an immunoprecipitation control (lower). d Mice were pretreated with or without poly(I:C) and subjected to I/R. Twenty-four hours after I/R, heart tissues were taken, and samples were immunoprecipitated with anti-TLR3 and western blotted with anti-PI3K. The same blot was reprobed with the TLR3 antibody as an immunoprecipitation control. e, f Representative photographs (e) and quantification (f) of the PLA reaction between TLR3 and PI3K (n = 6) (all experimental groups were compared via one-way ANOVA with Tukey's multiple comparisons test, bars indicate the SEM, *P < 0.05; **P < 0.01). Scale bar (first line), 100 μm; scale bar (second line), 50 μm; scale bar (third line), 10 μm. g Schematic showing that poly(I:C) protects the heart against I/R injury through phosphorylation of the TLR3 tyrosine residue, activation of PI3K, and recruitment of Akt, resulting in decreased NF-κB activity and reduced inflammatory and apoptotic responses

DATA AVAILABILITY

The data that support the findings of this study are available from the authors upon reasonable request; see author contributions for specific data sets.
A simplified lattice Boltzmann implementation of the quasi-static approximation in pipe flows under the presence of non-uniform magnetic fields

We propose a single-step simplified lattice Boltzmann algorithm capable of performing magnetohydrodynamic (MHD) flow simulations in pipes for very small values of the magnetic Reynolds number $R_m$. In some previous works, most lattice Boltzmann simulations are performed with values of $R_m$ close to the Reynolds numbers for flows in simplified rectangular geometries. One of the reasons is the limitation of some traditional lattice Boltzmann algorithms in dealing with situations involving the very small magnetic diffusion time scales associated with most industrial applications of MHD, which require the use of the so-called quasi-static (QS) approximation. Another reason is related to the significant dependence that many boundary condition methods for lattice Boltzmann have on the relaxation time parameter. In this work, to overcome the mentioned limitations, we introduce an improved simplified algorithm for velocity and magnetic fields which is able to directly solve the equations of the QS approximation, among other systems, without preconditioning procedures. In these algorithms, the effects of solid insulating boundaries are included by using an improved explicit immersed boundary algorithm, whose accuracy is not affected by the values of $R_m$. Some validations with classic benchmarks and the analysis of the energy balance in examples including uniform and non-uniform magnetic fields are shown in this work. Furthermore, a progressive transition between the scenario described by the QS approximation and the canonical MHD equations in pipe flows is visualized by studying the evolution of the magnetic energy balance in examples with unsteady flows.

Magnetohydrodynamic (MHD) flows are found in nature and in industrial applications involving many conductive fluids and plasma flows. In most industrial applications, for example, the magnetic Reynolds number $R_m$ is very often smaller than $10^{-2}$ [1]. Simulations involving small values of $R_m$ are usually performed by using the so-called quasi-static (QS) approximation, where the induced magnetic fluctuations are considered much smaller than the applied magnetic field [1][2][3]. The derivation of the QS approximation involves taking the limit of vanishing $R_m$, which can introduce several challenges from the numerical point of view. One of the biggest difficulties is associated with the need to solve a separate evolution equation for the magnetic field, and another comes with the presence of a very small diffusion time scale. Due to these difficulties, many numerical works in MHD have been restricted to cases where the magnetic Prandtl number $Pr_m$ is close to 1, i.e., where the magnetic and kinetic time scales are the same. This is also the case in many numerical works in the literature on lattice Boltzmann methods (LBM) [4][5][6][7]. In [5,7], simulations with very small $Pr_m$ are performed, but only in the context of stationary flows. One of the main objectives of this article is to approach the equations of the QS regime by using only a lattice Boltzmann framework. More specifically, we aim to extend the simplified lattice Boltzmann models proposed in [4,8] to simulations of MHD flows involving curved boundaries with very small values of the magnetic Reynolds number.
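To make these orders of magnitude concrete, the dimensionless groups defined in Section II can be estimated for a liquid-metal-like pipe flow; the property values below are illustrative assumptions typical of a liquid metal, not parameters taken from this work:

```python
# Illustrative estimate of MHD dimensionless numbers (placeholder values
# roughly representative of a liquid-metal pipe flow).
U0  = 0.1    # characteristic velocity [m/s]   (assumed)
L   = 0.05   # pipe diameter [m]               (assumed)
nu  = 1e-7   # kinematic viscosity [m^2/s]     (assumed)
eta = 0.1    # magnetic diffusivity [m^2/s]    (assumed)

Re  = U0 * L / nu    # Reynolds number
Rm  = U0 * L / eta   # magnetic Reynolds number
Prm = nu / eta       # magnetic Prandtl number (= Rm / Re)

print(f"Re = {Re:.2e}, Rm = {Rm:.2e}, Pr_m = {Prm:.2e}")
# Rm << 1 and Pr_m << 1 place such a flow in the quasi-static regime.
```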
In this analysis, we also intend to study the transition between the regime described by the canonical MHD equations and the regime characteristic of the QS approximation [2]. In our study, we manage to analyse not only the transition, but also regimes with $R_m \ll 1$, characteristic of industrial applications. In the original simplified single-step LBM [8], the straightforward introduction of the forcing terms does not take into consideration the lattice discrete effects, as pointed out by [9,10] for some analogous simplified LBM models. Also, many simplified models have limitations with respect to stability and accuracy for high values of relaxation times; the same limitation also appears in the classical LBM-BGK model [11,12], and it can be seen as one of the main obstacles for this model towards simulations with small values of $R_m$. Another issue is associated with the dependence that some boundary condition methods for LBM have on the relaxation time parameter, as pointed out by [13]. The influence of curved boundaries was not addressed by [4] in the context of MHD flows, and in Ref. [5], the only simulation involving curved boundaries is performed with $Pr_m = 1$. In this article, we overcome many of the limitations of the previous lattice Boltzmann models by introducing an improved simplified LBM framework able to perform simulations of the QS approximation in flows with curved insulating boundaries down to $Pr_m \sim 10^{-7}$ in the laminar regime. Not only that, by considering preconditioning procedures [5,7,15-17], we also manage to perform some simulations with $Pr_m > 1$, a regime characterized by fast fluctuations of the magnetic fields, which requires the use of more accurate numerical methods. In the LBM literature, to the best of our knowledge, only a few studies [18,19] analyzed MHD flows in this regime, showing accurate results up to $Pr_m = 2$.

This article is organized as follows. In the first part, Section II, we describe the general MHD equations and their connection with the quasi-static approximation, enumerating some important differences between the two systems from the numerical point of view. In Section III, we briefly introduce the traditional lattice Boltzmann method; in the following, we discuss a recent simplified single-step LBM algorithm for MHD flows based on the research developed by [4,8]. In Section IV, we describe the general structure of the benchmarks and validations considered throughout the article. In Section V, the single-step algorithm undergoes a series of improvements, where increases of stability and accuracy are proposed with some numerical validations. In the same section, a viscosity- and resistivity-independent immersed boundary method (IBM) able to simulate flows in the quasi-static regime is proposed. In Section VI, we apply the improvements developed in the previous sections to MHD flows involving non-uniform magnetic fields. In Section VII, techniques for the simulation of regimes with $Pr_m > 1$ are developed with some numerical validations; and in Section VIII, we provide some conclusions and perspectives.
II. MAGNETOHYDRODYNAMIC EQUATIONS AND THE QUASI-STATIC APPROXIMATION

The equations describing magnetohydrodynamic phenomena are formed by a coupling between the continuity and the Navier-Stokes equations for describing the fluid motion, and the Maxwell equations for electromagnetism, as follows [1]:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^2\mathbf{u} + \mathbf{J}\times\mathbf{B} + \mathbf{F}_{ext}, \quad (1)$$

$$\nabla\cdot\mathbf{u} = 0, \quad (2)$$

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{B}) + \eta\nabla^2\mathbf{B}, \quad (3)$$

$$\nabla\cdot\mathbf{B} = 0, \quad (4)$$

where $\mathbf{u}$ and $\mathbf{B}$ are the velocity and magnetic fields respectively, $\eta$ is the magnetic resistivity and $\mu$ is the dynamic viscosity of the fluid. We denote by $\nu = \mu/\rho$ the kinematic viscosity. For the sake of simplicity, in the rest of the article, we denote $\mathbf{u}\otimes\mathbf{B} = \mathbf{u}\mathbf{B}$ and $\mathbf{B}\otimes\mathbf{u} = \mathbf{B}\mathbf{u}$. The electric field $\mathbf{E}$ and the electric current density $\mathbf{J}$ are approximated by

$$\mathbf{J} = \nabla\times\mathbf{B}, \qquad \mathbf{E} = \eta\mathbf{J} - \mathbf{u}\times\mathbf{B}.$$

Consider a system where $U_0$ is the characteristic velocity, $B_0$ is the characteristic magnetic intensity and $L$ is the typical length scale. We have the following important dimensionless quantities:

$$Re = \frac{U_0 L}{\nu}, \qquad R_m = \frac{U_0 L}{\eta}, \qquad Ha = \frac{B_0 L}{\sqrt{\rho\nu\eta}}, \qquad Pr_m = \frac{R_m}{Re} = \frac{\nu}{\eta},$$

which are, respectively, the Reynolds number, the magnetic Reynolds number, the Hartmann number and the magnetic Prandtl number. In our study, we are mainly interested in situations where $R_m \ll 1$, characteristic of the QS approximation [1], in pipe flows as shown schematically in Figure 1. Writing $\mathbf{B} = \mathbf{B}_0 + \mathbf{b}$, where $\mathbf{b}$ denotes the induced magnetic fluctuations, the following system holds in this regime [2]:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^2\mathbf{u} + \mathbf{J}\times\mathbf{B}_0, \quad (7)$$

$$\nabla\cdot\mathbf{u} = 0, \quad (8)$$

$$\eta\nabla^2\mathbf{b} = -\nabla\times(\mathbf{u}\times\mathbf{B}_0), \quad (9)$$

$$\nabla\cdot\mathbf{b} = 0. \quad (10)$$

This approximation does not involve the problems with very small magnetic diffusion time scales. The convection-diffusion equation (3) for the magnetic field is replaced by the Poisson equation (9). A first difficulty comes with these changes: usually the lattice Boltzmann methods are not constructed to solve such types of equations. Also, in many problems, the solutions of Poisson equations involve non-local methods, which can be a problem if the objective is to perform parallelized simulations. In the next sections, we aim to approach the system (7)-(10) by using a lattice Boltzmann framework. In this approach, the problems with the very different diffusive time scales are handled by considering the asymptotic properties of a simplified LBM solver for advection-diffusion equations in order to treat the Poisson equation (9). The influence of curved walls is included by using an explicit immersed boundary method whose accuracy is not significantly affected by the coefficients of viscosity and resistivity. We also discuss lattice Boltzmann implementations of the system (1)-(4) for some simulations of pipe flows with $Pr_m > 1$. In the following sections, a detailed description of these methods will be given.

III. THE LATTICE BOLTZMANN METHOD

The starting point of the lattice Boltzmann method is the connection between the Boltzmann equation and the classical hydrodynamics equations. The Boltzmann equation is an integro-differential equation for the probability density function $f(\mathbf{x}, \mathbf{v}, t)$ in the six-dimensional space of particle position $\mathbf{x} \in \mathbb{R}^3$ and velocity $\mathbf{v} \in \mathbb{R}^3$, given by

$$\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_x f + \frac{\mathbf{F}_{ext}}{\rho}\cdot\nabla_v f = Q(f,f), \quad (11)$$

where $Q(f, f)$ is the collision integral, $\mathbf{F}_{ext}$ is the body force, $\rho$ is the macroscopic mass density of the system, and $\nabla_x$ and $\nabla_v$ are gradients with respect to the position $\mathbf{x}$ and velocity $\mathbf{v}$ coordinates, respectively. It can be shown that the collision integral $Q(f, f)$ has at least five invariants [20], i.e., a set of functions $\xi_k$, $k = 1, \dots, 5$, satisfying

$$\int Q(f,f)\,\xi_k(\mathbf{v})\, d\mathbf{v} = 0,$$

which are $\xi_1 = 1$, $(\xi_2, \xi_3, \xi_4) = \mathbf{v}$ and $\xi_5 = |\mathbf{v}|^2$. A general collision invariant can be written as a linear combination of the functions $\xi_k$.
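To make the numerical role of the Poisson equation (9) concrete, the following minimal sketch (not the LBM solver developed later in this article) computes the induced field $\mathbf{b}$ from a frozen velocity field with plain Jacobi sweeps on a periodic unit-spacing grid; the function names, the use of numpy finite differences and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def curl(F):
    # Curl of a vector field F of shape (3, nx, ny, nz), using numpy's
    # central differences (one-sided at the array edges); grid spacing 1.
    dFz_dy, dFy_dz = np.gradient(F[2], axis=1), np.gradient(F[1], axis=2)
    dFx_dz, dFz_dx = np.gradient(F[0], axis=2), np.gradient(F[2], axis=0)
    dFy_dx, dFx_dy = np.gradient(F[1], axis=0), np.gradient(F[0], axis=1)
    return np.stack([dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy])

def qs_induced_field(u, B0, eta, n_iter=500):
    # Solve eta * lap(b) = -curl(u x B0), eq. (9), by Jacobi iteration.
    # u, B0: arrays of shape (3, nx, ny, nz); periodic boundaries assumed.
    rhs = -curl(np.cross(u, B0, axis=0)) / eta
    b = np.zeros_like(u)
    for _ in range(n_iter):
        nb = sum(np.roll(b, s, axis=a) for a in (1, 2, 3) for s in (-1, 1))
        b = (nb - rhs) / 6.0          # 7-point Laplacian stencil, h = 1
    return b
```

Note how the solve is global: each Jacobi sweep is local, but many sweeps are needed to propagate boundary information, which is precisely the non-locality issue mentioned above for parallelized simulations.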
The invariants are associated with some important macroscopic quantities of the system, such as

$$\rho(\mathbf{x},t) = \int f\, d\mathbf{v}, \quad (13) \qquad \rho\mathbf{u}(\mathbf{x},t) = \int \mathbf{v}\, f\, d\mathbf{v}. \quad (14)$$

A set of conservation laws for each of these quantities can be obtained by multiplying the Boltzmann equation (11) by a collision invariant and subsequently integrating with respect to the velocity. In the lattice Boltzmann method (LBM), the basic quantity is the discrete-velocity distribution function $f_i(\mathbf{x}, t)$; it represents the density of particles with velocity $\mathbf{c}_i$ at position $\mathbf{x}$ and time $t$. By discretizing the Boltzmann equation (11) in velocity space, physical space, and time, we obtain the discrete Boltzmann equation [11,12]

$$f_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) - f_i(\mathbf{x},t) = \Omega_i(\mathbf{x},t), \quad (15)$$

where $\Omega_i(\mathbf{x}, t)$ is the discrete version of the collision integral in (11). This equation expresses that a particle population $f_i(\mathbf{x}, t)$ moves with velocity $\mathbf{c}_i$ to the nearest neighbors after a time step $\delta t$, i.e., the grid spacing is given by $\delta x = |\mathbf{c}_i|\delta t$. Analogously, the mass density $\rho$ and momentum density $\rho\mathbf{u}$ at $(\mathbf{x}, t)$ can be found through weighted sums known as moments of $f_i$,

$$\rho = \sum_i f_i, \quad (16) \qquad \rho\mathbf{u} = \sum_i \mathbf{c}_i f_i, \quad (17)$$

in a similar fashion to (13) and (14). The main difference between $f_i$ and the continuous distribution function $f$ is that all of the argument variables of $f_i$ are discrete, with the subscript $i$ referring to a finite discrete set of velocities $\mathbf{c}_i$ as shown in Figure 2. The discrete collision integral $\Omega_i$ is given by the BGK operator defined as

$$\Omega_i = -\frac{\delta t}{\tau}\left(f_i - f_i^{(eq)}\right), \quad (18)$$

where the equilibrium distribution is given by

$$f_i^{(eq)} = w_i \rho\left(1 + \frac{\mathbf{c}_i\cdot\mathbf{u}}{c_s^2} + \frac{(\mathbf{c}_i\cdot\mathbf{u})^2}{2c_s^4} - \frac{|\mathbf{u}|^2}{2c_s^2}\right), \quad (19)$$

where $c_s$ is the speed of sound, given by $c_s = c/\sqrt{3}$, and $w_i$ are the lattice weights associated with the velocity scheme D3Q27, as shown in Table I. Using the BGK approximation in equation (15), we obtain the lattice BGK equation

$$f_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = f_i(\mathbf{x},t) - \frac{\delta t}{\tau}\left(f_i(\mathbf{x},t) - f_i^{(eq)}(\mathbf{x},t)\right). \quad (20)$$

The simplest way to initialize the populations at the initial time $t = 0$ is to set $f_i(\mathbf{x}, t = 0) = f_i^{(eq)}(\rho(\mathbf{x}, t = 0), \mathbf{u}(\mathbf{x}, t = 0))$. The kinematic viscosity $\nu$ is connected to the relaxation time $\tau$ by the equation

$$\nu = c_s^2\left(\tau - \frac{\delta t}{2}\right). \quad (21)$$

The BGK scheme is the most traditional LBM algorithm, with many interesting applications, but it has well-known limitations in terms of stability and memory requirements, and some problems with appropriate boundary condition methods for some types of complex multiphysics simulations [11,12]. In the next section, we discuss a recent approach that began with the works developed by [8,21,22], later extended to MHD flows by [4], towards a simplified lattice Boltzmann method that does not involve the evolution of the non-equilibrium distributions. In this approach, a single-step algorithm is formulated, giving a more efficient method in terms of memory requirements and stability in comparison with the traditional BGK algorithm (20), while keeping almost the same accuracy.

B. Connection with hydrodynamic equations

From (15), we can derive solutions for the Navier-Stokes equations by first considering a 2nd-order Taylor series expansion in time and space given by

$$f_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = f_i + \delta t\, D_i f_i + \frac{\delta t^2}{2} D_i^2 f_i + O(\delta t^3),$$

where $D_i = \frac{\partial}{\partial t} + \mathbf{c}_i\cdot\nabla$ denotes the material derivative. Up to a second-order error, we have [23]

$$D_i f_i + \frac{\delta t}{2} D_i^2 f_i = \frac{\Omega_i}{\delta t}. \quad (23)$$

Next, consider the Chapman-Enskog multiscale expansion [24],

$$\frac{\partial}{\partial t} = \varepsilon\frac{\partial}{\partial t_1} + \varepsilon^2\frac{\partial}{\partial t_2}, \qquad \nabla = \varepsilon\nabla_1, \quad (24)$$

where $\varepsilon$ is a small parameter proportional to the Knudsen number [12]. In this expansion, it is assumed that the diffusion time scale $t_2$ is much larger than the convective time scale $t_1$, and that diffusion and convection act on the same spatial scale [20]. In a similar fashion, the distribution function $f_i$ can be expanded about the local equilibrium distribution function $f_i^{eq}$ as

$$f_i = f_i^{eq} + \varepsilon f_i^{(1)} + \varepsilon^2 f_i^{(2)} + \cdots = f_i^{eq} + f_i^{neq}, \quad (25)$$

where $f_i^{neq}$ is the non-equilibrium distribution, which is associated with viscous dissipation and verifies the following constraints, called solvability conditions:

$$\sum_i f_i^{neq} = 0, \qquad \sum_i \mathbf{c}_i f_i^{neq} = 0. \quad (26)$$
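As a reference point for the improvements discussed later, the following sketch spells out the D3Q27 lattice, the equilibrium (19) and one stream-and-collide application of the BGK update (20) on a periodic grid. The weight assignment by number of non-zero velocity components is the standard one for D3Q27 (Table I); the array layouts and names are our own illustrative choices.

```python
import numpy as np
from itertools import product

C = np.array(list(product([-1, 0, 1], repeat=3)))      # 27 lattice velocities
W = np.array([{0: 8/27, 1: 2/27, 2: 1/54, 3: 1/216}[np.count_nonzero(c)]
              for c in C])                              # standard D3Q27 weights
cs2 = 1.0 / 3.0                                         # speed of sound squared

def f_eq(rho, u):
    # Second-order equilibrium (19); rho: (nx,ny,nz), u: (nx,ny,nz,3)
    cu = np.einsum('qa,xyza->xyzq', C, u) / cs2
    u2 = np.einsum('xyza,xyza->xyz', u, u) / (2 * cs2)
    return W * rho[..., None] * (1 + cu + 0.5 * cu**2 - u2[..., None])

def bgk_step(f, tau):
    # One collide-and-stream cycle of (20) with delta_t = delta_x = 1.
    rho = f.sum(-1)
    u = np.einsum('xyzq,qa->xyza', f, C) / rho[..., None]
    f = f - (f - f_eq(rho, u)) / tau                    # collide
    for q, c in enumerate(C):                           # stream (periodic)
        f[..., q] = np.roll(f[..., q], shift=tuple(c), axis=(0, 1, 2))
    return f
```

The memory cost of storing all 27 populations per node is exactly what the single-step formulation discussed next avoids.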
Substituting (24) and (25) into (23) and combining the sequence of equations obtained up to order $O(\varepsilon^2)$, we obtain a system of two moment equations, (27) and (28), involving sums of the form $\sum_{i=0}^{N} \partial f_i^{eq}/\partial t$ over the equilibrium and non-equilibrium distributions. By using the moments (16), (17) and the solvability conditions (26), it follows that equations (27) and (28) can be turned into solutions for the continuity and Navier-Stokes equations, respectively [11].

C. Single-step lattice Boltzmann algorithm for the Navier-Stokes equations

Equations (27) and (28) are the starting point of many simplified LBM algorithms [8,21,22]. Different discretization schemes for these equations produce different simplified algorithms. In this article, the starting point is the approach developed by [8], which will be described in the following with a slightly different derivation. Considering suitable finite-difference approximations, we can rewrite (27) in discrete form and, using (16), we arrive at an update formula (33) for the density. For the momentum equation (28), the term $\mathbf{c}_i\cdot\nabla f_i^{eq}$ is discretized in a different way, as in (34), where we used the constraints in (26). Using (29), we obtain (36), where forward and backward finite differences are combined. Substituting (32), (34) and (36) into (28), and considering (17), we arrive at the momentum update (37). Equations (33) and (37) constitute the single-step lattice Boltzmann algorithm [8]. It is important to observe that these formulas depend only on the equilibrium distributions, which are in turn functions only of the macroscopic quantities of the system. This feature significantly reduces the memory requirements in comparison to the traditional BGK algorithm, and also simplifies the implementation of boundary conditions, as we no longer have to deal with complicated manipulations of non-equilibrium distributions at the boundaries. In the next section, we consider a similar development in the context of the advection-diffusion equation (3) for the canonical MHD system.

D. Single-step simplified LBM algorithm for the magnetic field equations

In [25], Dellar derived an extension of the lattice BGK scheme (20) that solves the advection-diffusion equation (3) for the magnetic field. This work also presents, in a similar fashion, an analogous lattice BGK algorithm for a set of vector-valued distributions $\mathbf{g}_i$,

$$\mathbf{g}_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = \mathbf{g}_i(\mathbf{x},t) - \frac{\delta t}{\tau_m}\left(\mathbf{g}_i(\mathbf{x},t) - \mathbf{g}_i^{eq}(\mathbf{x},t)\right), \qquad \mathbf{B} = \sum_i \mathbf{g}_i, \quad (38)$$

which solves, for example, the $x$-component of the magnetic field. The relationship between the resistivity $\eta$ and the relaxation parameter $\tau_m$ is given by

$$\eta = c_s^2\left(\tau_m - \frac{\delta t}{2}\right),$$

where $c_s$ is the corresponding speed of sound. An analogous equilibrium distribution is defined as

$$\mathbf{g}_i^{eq} = W_i\left[\mathbf{B} + \frac{\mathbf{c}_i\cdot(\mathbf{u}\mathbf{B} - \mathbf{B}\mathbf{u})}{c_s^2}\right], \quad (41)$$

where $W_i$ are the corresponding lattice weights. In the work [4], the authors introduced a single-step (or one-stage) simplified LBM algorithm for (3) following the same steps as [8], as we describe in the following. The lattice Boltzmann equation (LBE) can be written as in (42); by applying a Taylor series expansion to the left-hand side of (42), followed by a Chapman-Enskog expansion up to second order, it is possible to write an evolution equation (43) with the corresponding closure (44). Considering the finite-difference schemes (45)-(46), the magnetic field can be updated through a single-step formula (47) which, analogously, is only a function of the equilibrium distribution given by (41). This algorithm is also usually much more stable than the traditional form (38).

E. Summary of the one-stage simplified LBM algorithm for MHD flows

Considering the expressions (51)-(53) for the equilibrium distributions, we obtain the single-step (or one-stage) LBM algorithm (54) for MHD flows, with $\mathbf{g}_i^{eq} = [g_{ix}^{eq}, g_{iy}^{eq}, g_{iz}^{eq}]$.
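The essential structural point of (33) and (37), namely that only equilibrium populations and hence only macroscopic fields need to be stored, can be illustrated schematically as follows. This fragment rebuilds the macroscopic fields from streamed equilibria alone and deliberately omits the finite-difference correction terms of (33) and (37), so it is not the full second-order algorithm; it reuses f_eq and C from the previous sketch.

```python
def single_step_macroscopic(rho, u):
    # Schematic single-step idea: stream the *equilibrium* populations and
    # take moments; no non-equilibrium populations are ever stored. The
    # viscous correction terms of (37) are omitted here for brevity.
    feq = f_eq(rho, u)
    feq_streamed = np.empty_like(feq)
    for q, c in enumerate(C):
        feq_streamed[..., q] = np.roll(feq[..., q], shift=tuple(c),
                                       axis=(0, 1, 2))
    rho_new = feq_streamed.sum(-1)                      # analogue of (33)
    mom_new = np.einsum('xyzq,qa->xyza', feq_streamed, C)
    return rho_new, mom_new / rho_new[..., None]
```

Only (rho, u) persist between steps, which is the memory saving and boundary-condition simplification claimed in the text.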
External forcing terms $\mathbf{F}_{ext}$ are usually added in a straightforward way, as in (55). Dirichlet-type boundary conditions are imposed simply by assigning the desired values to the boundary points, and some other types of boundary conditions are also implemented very similarly to conventional MHD solvers. To the best of our knowledge, no studies of the single-step LBM algorithm have been conducted in the context of MHD flows involving curved boundaries. The success of the use of immersed boundary methods [11] in some previous lattice Boltzmann models [13,21,26] indicates an interesting direction for the inclusion of curved boundaries in simulations of MHD flows. It is important to observe that the inclusion of the forcing terms by using (55) does not consider the so-called lattice discrete effects [9], associated with the correct treatment of the contribution of the forcing term $\frac{\mathbf{F}_{ext}}{\rho}\cdot\nabla_v f$ in equation (11). This limitation can compromise the accuracy of the simulations, especially in cases involving non-uniform or unsteady forcing terms. Another limitation of the algorithm (54) is associated with the loss of stability and accuracy for high values of relaxation times. Considering $\delta t = \delta x = 1$, the simulations become easily unstable for values of relaxation times $\tau > 0.5$ and $\tau_m > 0.5$; a similar limitation is also shared by other simplified methods. In our work, one of the main objectives is to solve the quasi-static approximation in MHD, and for this objective it is necessary to consider high values of the resistivity $\eta$, which usually implies very high values of $\tau_m$. In the next sections, we address all of the mentioned limitations. We first consider an implementation of a forcing scheme that takes into consideration the effects of variable forcing terms in a more accurate way. Next, we consider extensions of the simplified LBM algorithms for regimes of high values of relaxation times. In the final part, we introduce explicit immersed boundary algorithms for simulations of flows involving curved boundaries, whose accuracy is independent of the values of the resistivity and viscosity coefficients.

IV. VALIDATIONS AND BENCHMARKS

In the next sections, we introduce some improvements in the simplified single-step algorithm. For examples involving stationary flows under the presence of a uniform magnetic field with insulating walls, as represented in Figure 1, we compare the numerical solutions with the analytical solution derived by Richard R. Gold [27] for a pipe flow subjected to a constant transverse magnetic field. Gold's solutions (56) and (57) for the streamwise components of the velocity and magnetic fields of the system (1)-(4) are series expansions in which $\alpha = Ha/2$, $\epsilon_n$ equals 1 for $n = 0$ and 2 for $n > 0$, $I_n$ is the modified Bessel function of the first kind of order $n$ and $I_n'$ is the respective derivative. The Hartmann number, in the context of the experiments of this article, is defined as

$$Ha = \frac{B_0 R}{\sqrt{\rho\nu\eta}},$$

where $B_0$ is the characteristic magnetic field intensity and $R$ is the pipe radius. We also study the effects of non-stationary and transient flows by analysing the evolution of the magnetic energy $E_m = \frac{1}{2}\langle|\mathbf{B}|^2\rangle$ and the kinetic energy $E_k = \frac{1}{2}\rho\langle|\mathbf{u}|^2\rangle$ (per unit of volume), where $\langle\cdot\rangle$ denotes spatial averages within a cylinder with radius smaller than the radius of the pipe. The respective variations are given by the energy balance equations (58) [1]. The energy budget in (58) is analysed for constant and variable forcing terms. For the study of unsteady forcing terms, we analyse the effects of a variable pressure difference defined as in (59), where $F_0$ is a reference force intensity and $T$ is the period.
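For the diagnostics of this section, the following hedged sketch evaluates the averaged energies entering (58) inside a given cylindrical mask, and sketches one possible sinusoidal form of the variable force (59). The exact expression of (59) is the one defined in the text; the sine form and the function names here are assumptions for illustration.

```python
def energies(rho, u, B, mask):
    # Kinetic and magnetic energies per unit volume, averaged over the
    # Eulerian cells selected by the boolean array `mask` (a cylinder of
    # radius smaller than the pipe radius).  u, B: (nx,ny,nz,3).
    Ek = 0.5 * (rho * (u**2).sum(-1))[mask].mean()
    Em = 0.5 * (B**2).sum(-1)[mask].mean()
    return Ek, Em

def variable_force(t, F0, T):
    # Assumed periodic pressure-difference forcing with intensity F0 and
    # period T; stands in for the exact definition (59).
    return F0 * np.sin(2.0 * np.pi * t / T)
```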
In order to be able to verify Gold's solutions, we first need to introduce a set of improvements in the previous single-step algorithm given by (54). For the proper consideration of the forcing terms of (11) in the simplified single-step algorithm (55), we consider the introduction of a consistent forcing scheme that takes into consideration the discrete effects at the level of the distribution functions, similar to the developments in [9,10]. In this section, we include the GZS forcing scheme [28] into the algorithm (54). The BGK algorithm with the GZS scheme is expressed as

$$f_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = f_i - \frac{\delta t}{\tau}\left(f_i - f_i^{eq}\right) + \delta t\, F_i, \quad (60)$$

where

$$F_i = \left(1 - \frac{\delta t}{2\tau}\right) w_i\left[\frac{\mathbf{c}_i - \mathbf{u}}{c_s^2} + \frac{(\mathbf{c}_i\cdot\mathbf{u})}{c_s^4}\,\mathbf{c}_i\right]\cdot\mathbf{F}_{ext}. \quad (61)$$

As pointed out by [9], the application of the Chapman-Enskog expansion analysis to (60) gives rise to modified moment equations in which the non-equilibrium part carries a forcing contribution; substituting (66) into (64), we obtain (67), in which the extra term $\mathbf{c}_i(\mathbf{c}_i\cdot\nabla)\tau\delta t F_i$ is associated with the lattice discrete effects that only appear for variable forcing terms. For this term, a discretization based on isotropic finite differences [29] can be considered. Therefore, the single-step algorithm for the velocity (55) should be rewritten as in (69). With this improvement, it is possible to simulate more accurately multiple forms of external force interactions, including space- and time-dependent body forces, such as the Lorentz force [1]

$$\mathbf{F}_{ext} = \mathbf{J}\times\mathbf{B} = (\nabla\times\mathbf{B})\times\mathbf{B},$$

where the curl can be calculated by using isotropic finite differences [29]. Effects of magnetic fields can also be introduced by changing the equilibrium distribution [4] in such a way that the divergence of the Maxwell stress tensor is implemented [4,6]. This approach has not shown stable results in our numerical experiments for the case of non-uniform magnetic fields in simulations involving very small $R_m$. For this reason, in this article the forcing-term approach is considered in all of the numerical experiments.

B. Boundary condition-enforced IBM

In this section, in order to introduce the effects of curved boundaries in MHD flows, we consider the immersed boundary method (IBM). In this method, a fixed Eulerian mesh is applied on which the flow field is resolved, while the immersed solid boundary is described by a set of discrete Lagrangian points distributed in the fluid domain. The flow variables resolved on the Eulerian mesh are corrected by a restoration force exerted from the solid boundary [11]. In this article, we consider velocity and magnetic field corrections given by an extension of the boundary condition-enforced IBM based on the developments in [26], as we describe below. In most IBMs, the introduction of the effects of the boundaries is given by a predictor-corrector algorithm. In the predictor step, the LBM algorithm solves a general system without boundary effects. The effects of the boundaries are imposed as an extra forcing term introduced in the corrector step, where $\mathbf{f}$ is determined by the IBM to reproduce the effects of the immersed objects. Since the forcing term $\mathbf{f}$ is not considered in the prediction step, the intermediate velocity $\mathbf{u}^*$ obtained in the predictor step must be corrected. The corrector step (73) is discretized in time, where $\delta\mathbf{u}$ is the velocity correction, and the corrected velocity is given by (75). In order to calculate the corrections, interpolations between the Lagrangian and the Eulerian meshes are usually made using discrete delta functions [11]. In this article, we consider a different approach for the interpolation procedure, which was suggested by [30] in the context of 2D flows.
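In the form popularized by Guo, Zheng and Shi, the forcing populations (61) can be evaluated as below; this sketch assumes the standard expression written out in (61) and reuses C, W and cs2 from the earlier D3Q27 sketch.

```python
def gzs_forcing(u, F, tau, dt=1.0):
    # GZS forcing populations, eq. (61): F_i = (1 - dt/(2 tau)) w_i *
    # [ (c_i - u)/cs^2 + (c_i . u) c_i / cs^4 ] . F
    # u, F: (nx,ny,nz,3); returns populations of shape (nx,ny,nz,27).
    cu = np.einsum('qa,xyza->xyzq', C, u)               # (c_i . u)
    term = (C[None, None, None] - u[..., None, :]) / cs2 \
         + cu[..., None] * C[None, None, None] / cs2**2
    Fi = np.einsum('xyzqa,xyza->xyzq', term, F)         # contract with F
    return (1.0 - 0.5 * dt / tau) * W * Fi
```

The prefactor $(1 - \delta t/(2\tau))$ is exactly what compensates the lattice discrete effects, so the momentum moment must also be shifted by $\delta t\,\mathbf{F}/2$ when this scheme is used.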
In this work, the authors showed that the use of Lagrange polynomials, instead of numerical delta functions, gives significantly better results in terms of accuracy. More specifically, the velocity correction $\delta\mathbf{u}(\mathbf{x}_i)$ at the Eulerian mesh cell $i$ is distributed from the velocity corrections $\delta\mathbf{U}(\mathbf{X}_j)$ at the Lagrangian points $\mathbf{X}_j$ by using classical Lagrange interpolation schemes, as in (76), where $N$ is the total number of Lagrangian points and $D$ accounts for a Lagrange velocity polynomial interpolation written as

$$D(\mathbf{r}) = D_x(r_x)\, D_y(r_y)\, D_z(r_z), \quad (79)$$

where, for the purposes of this article, the coefficients $D_x(r_x)$ are given by a low-order Lagrange basis, and analogously for $D_y(r_y)$ and $D_z(r_z)$. The use of higher-order Lagrange polynomials is possible [30], but in the experiments of this article no significant differences were found by using them. Analogously, the velocity $\mathbf{U}_b(\mathbf{X}_j)$ at the Lagrangian point $\mathbf{X}_j$ can be interpolated from the corrected velocity $\mathbf{u}$ at the Eulerian mesh points by using (80), where $S(j)$ is the set of neighboring Eulerian cells near the Lagrangian point $\mathbf{X}_j$, defined in terms of the grid spacing $h$ of the Eulerian mesh, which in this article is set to unity without loss of generality. Substituting (76) and (75) into (80), we obtain equation (82), where $\delta\mathbf{U}(\mathbf{X}_j)$ is an unknown velocity correction, $\mathbf{U}_b$ is the imposed velocity at the immersed boundary points and $\mathbf{u}^*$ is known from the predictor step. In matrix form, the relation (82) is assembled over all Lagrangian points, where $M$ is the total number of Eulerian points in the sets $S(j)$, $j = 1, \dots, N$. The velocity correction $\delta U$ is obtained by solving the system

$$A\,\delta U = b, \quad (86)$$

where $A = DD^T \in \mathbb{R}^{N\times N}$ and $b = U_b - D u^*$. The corresponding corrected velocity at the Eulerian nodes is given by (87). It is important to mention that the matrices $D$ and $D^T$ are easily obtained, but the inversion of the matrix $A$ can be a non-trivial procedure. In the following, based on the developments in [26], we discuss an explicit strategy to solve the problem (86) which does not involve the direct inversion of the matrix $A$.

C. Explicit boundary condition-enforced IBM

In a more explicit way, the system (86) is written componentwise in (88), with coefficients $A_{ij}$. Note that we only need to consider the non-zero values of the coefficients $A_{ij}$, i.e., in the summation in (88) we only need to consider $j \in \{j : A_{ij} \neq 0\}$. The momentum correction is then linearized in the vicinity of $\mathbf{X}_i$ in the form

$$\delta\mathbf{U}(\mathbf{X}_j) \approx \delta\mathbf{U}(\mathbf{X}_i) + \nabla\delta\mathbf{U}(\mathbf{X}_i)\cdot d\mathbf{X}_{ij}, \quad (90)$$

where $d\mathbf{X}_{ij} = \mathbf{X}_j - \mathbf{X}_i$. Assuming that the curvature of the immersed boundary is small, in such a way that it can be approximated by a straight wall in the vicinity of $\mathbf{X}_i$ [13,26], it follows, as a consequence of the properties of the interpolating function (79), that the gradient contributions cancel to leading order (91). Substituting (90) and (91) into (88), we find, up to a second-order error, that the unknown correction $\delta\mathbf{U}(\mathbf{X}_i)$ can be moved out of the summation, which leads to the simplified system [26]

$$\delta\mathbf{U}(\mathbf{X}_i)\,\sum_j A_{ij} = \mathbf{U}_b(\mathbf{X}_i) - (D u^*)(\mathbf{X}_i), \quad (93)$$

or, in matrix form, (94), where $N$ is the number of immersed boundary points. Substituting the solutions of (94) into (87), it follows that the corrected velocities at the Eulerian nodes are given by (96). An interesting feature of this method is that it avoids the direct inversion of the matrix $A$ in (86), which can be computationally expensive, especially if moving boundaries are involved, since that would require inverting $A$ repeatedly. The explicit character of (96) also simplifies the implementation of the method on GPUs.

D. Boundary condition-enforced IBM for the magnetic field

An analogous predictor-corrector procedure is considered for the magnetic field: in the predictor step (97), the magnetic field equation is solved without boundary effects, where $Q$ denotes a general source term.
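The practical content of the simplified system (93) is that each Lagrangian correction is obtained by a single division instead of a linear solve. A minimal sketch, assuming the row sums of $A$ have been precomputed:

```python
def explicit_ib_correction(U_b, Du_star, A_row_sums):
    # Explicit solution of (93): delta U at each Lagrangian point X_i.
    # U_b, Du_star: (N, 3) imposed / interpolated boundary velocities;
    # A_row_sums: (N,) precomputed values of sum_j A_ij.
    return (U_b - Du_star) / A_row_sums[:, None]
```

The corrections would then be spread back to the Eulerian nodes through $D^T$ as in (96); because no matrix is inverted, the whole corrector step is embarrassingly parallel.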
In this method, the boundary effects are imposed as an extra source term introduced in the following corrector step, where $\mathbf{q}$ is determined by the IBM to include the effects of the magnetic fields generated by the immersed objects. The corrector step is discretized analogously, and the corresponding corrected magnetic field is written in terms of $\mathbf{B}^*$, the magnetic field obtained in the predictor step (97). Following the same steps as in the case involving the velocity field, we arrive at the explicit correction (102), where $\mathbf{B}_b$ is the imposed magnetic field at the immersed boundary points.

A first verification of the corrections (96) and (102) is performed in a computational grid with size $n_x \times n_y \times n_z = 5 \times 80 \times 80$. The immersed boundary is approximated by a cylinder formed by small rectangular (almost square) elements, as shown in Figure 3(b). The number of elements is chosen in such a way that each element has an area close to $(\delta x)^2$, which is a common criterion for IB methods [11]. It is possible to see a significant mismatch in the comparisons between the numerical solutions for $U_x$ and $B_x$ and Gold's solutions (56) and (57). A similar mismatch also appears in the quasi-static regime, as shown in Figure 5, where a simulation with $Pr_m = 4\times10^{-7}$ and $Ha = 18$ with the same computational grid size is performed using some methods to be described in the next sections. All this suggests that the accuracy of the corrections given by (96) and (102) has some dependence on the coefficients of viscosity and resistivity. It implies that, for simulations of the quasi-static approximation characterized by $Pr_m \ll 1$, some improvements are needed. Strategies for the solution of this problem will be described in the next subsections.

E. Viscosity-independent boundary condition-enforced IBM

In this section, we extend the previous IBM results so that they remain accurate for arbitrary values of the viscosity. In [13], the authors suggested that the complete description of an immersed boundary problem also involves the inclusion of a non-dimensional IB force. More specifically, in any physical configuration, the flow solution can be described by a set of non-dimensional physical quantities, such as the non-dimensional pressure and velocity

$$\tilde{\mathbf{u}} = \frac{\mathbf{u}}{U_r}, \qquad \tilde{p} = \frac{p}{p_r},$$

where $U_r$, $p_r$ and $\rho_r$ are the velocity, pressure and density of reference, respectively. In addition, a non-dimensional IB force is defined as

$$\tilde{\mathbf{f}} = \frac{\mathbf{f}\,L}{\rho_r U_r^2}.$$

Consider two sets of dimensional quantities $(\rho_1, \mathbf{u}_1, \mathbf{f}_1)$ and $(\rho_2, \mathbf{u}_2, \mathbf{f}_2)$, which we call systems 1 and 2, respectively. Let us also consider that the reference densities and characteristic lengths are the same, i.e., $\rho_1 = \rho_2 = \rho_r$ (small Mach number assumption) and $L_1 = L_2$. In this situation, if both systems are solutions of the same physical problem, then the sets 1 and 2 result in the same set of non-dimensional quantities, which in our case implies the same Reynolds, Mach and Froude numbers. In this case, denoting the reference velocities of systems 1 and 2 by $U_1$ and $U_2$ respectively, it follows that the two systems are connected by the scaling factor $\lambda = U_2/U_1$, which is also the viscosity ratio between configurations 1 and 2, i.e., $\lambda = \nu_2/\nu_1$. As a consequence, the following scaling laws are verified:

$$\mathbf{u}_2 = \lambda\,\mathbf{u}_1, \qquad p_2 = \lambda^2 p_1, \qquad \mathbf{f}_2 = \lambda^2\,\mathbf{f}_1. \quad (105)$$

The IB forces can be rewritten accordingly, which leads to equation (107). Comparing (105) and (107), we can observe that, despite the fact that the physical quantities in (105) exhibit self-similar scaling properties, the velocities corrected by the IBM cannot be directly rescaled using $\lambda$, because $\mathbf{u}^*_1$ has a dependence on $\mathbf{u}_2$.
This property is one of the possible causes of the error shown in Figure 4. In the following, we describe the corrections that should be considered in order to introduce the correct IB adjustments. Let us denote the Lagrangian velocity corrections given by (96) for systems 1 and 2 as $\delta U_1$ and $\delta U_2$, respectively. The scaling verified at the Eulerian nodes should also be verified at the Lagrangian nodes, i.e., $\delta U_2 = \lambda^2 \delta U_1$. Let us consider that system 1 is a reference configuration that does not need scaling corrections. Using (86), we obtain (108). As we already mentioned, the matrix $DD^T$ is usually ill-conditioned and its inversion is a non-trivial procedure, requiring some special techniques in order to approximate the inverse of $DD^T$. Let us consider, without loss of generality, the least-squares solution of (108) written in terms of the pseudoinverse $(DD^T)^\dagger$ and its representation formula; then, using (96), the IB force verifying the correct scaling properties is given by (112), where in the last equation we consider some general properties of pseudoinverse matrices [31]. It is important to observe that the term $DD^\dagger$ does not have to be the identity matrix $I$. Depending on the immersed boundary method, we may cancel the coefficient $\lambda$, but for some explicit velocity-correction-based IBMs, such as the one described in this article, that is not the case. Due to the properties of the interpolating functions (79), it follows that we can use power series and show that a first approximation for $D^\dagger$ is given by $D^T$ [31-33]. Using again (94) and considering $D^\dagger \approx D^T$, we obtain a further simplification; consequently, we can rewrite (113) as (115), where the term $(DD^T)^\dagger(U_{b,2} - Du^*_2)$ in (115) corresponds to the previous velocity correction obtained by finding the least-squares solution of the system (86). It is interesting to note that the form of the scalings in the matrix in equation (115) is very similar to the scalings obtained in [13] in the context of the direct-forcing IBM, with the difference that in our work we find a matrix of scalings rather than a single scaling. Then, substituting (115) into (112) and using (94), it follows that the new corrected velocity, with the necessary scaling corrections, is given by the rescaled counterpart of (96). In the next subsection, we consider the introduction of similar corrections in the context of the explicit boundary condition-enforced IBM for the magnetic field equations.

F. Resistivity-independent boundary condition-enforced IBM

In this subsection, for the explicit IBM for the magnetic field described in Subsection V D, we consider a procedure analogous to the case involving the velocity field. The two important non-dimensional physical parameters in this case are the non-dimensional magnetic field and the magnetic Reynolds number. Consider two sets of dimensional quantities $(\mathbf{u}_1, \mathbf{B}_1)$ and $(\mathbf{u}_2, \mathbf{B}_2)$, which we also call systems 1 and 2, respectively. We also assume that both sets are associated with the same physical system, which implies the same set of non-dimensional quantities. The corresponding scaling factor is given by $\lambda_{mag} = \eta_2/\eta_1$, which leads to analogous scaling relationships, where $\mathbf{B}^*_1$ and $\mathbf{B}^*_2$ are the magnetic fields obtained in the predictor step (97). The equation for the corrected magnetic field $\mathbf{B}_2$ then follows, where $\mathbf{B}_{2,b}$ is the imposed magnetic field at the immersed boundary points associated with the system configuration 2.
G. Stability improvements for high values of viscosity and resistivity

In this section, we aim to extend the range of stability of the previous simplified methods to regimes associated with high values of relaxation times. The main idea is first to set the relaxation time $\tau = 1$ [14,34] in the classical BGK algorithm (15), obtaining the so-called macroscopic lattice Boltzmann model, given simply by

$$f_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = f_i^{(eq)}(\mathbf{x},t). \quad (121)$$

It is possible to show that the particle speed $c$ can be changed in such a way as to include the effects of different viscosities $\nu$, through

$$c = \frac{6\nu}{\delta x}, \quad (122)$$

where $\delta t = \delta x/c$. In our applications, for the sake of simplicity, we always consider $\delta x = 1$. Accordingly, the change in the particle speed $c$ also implies a corresponding rescaling of the lattice velocities of the D3Q27 scheme. The algorithm formed by (121) and (122) is particularly efficient and stable for flow simulations with small and moderate Reynolds numbers. High Reynolds numbers usually require a very small $\delta x$, which implies a substantial increase in the number of points of the computational grid. In this article, this algorithm is suggested as an extension, for $\tau \geq 1$, of the single-step algorithm (54). Actually, it can be considered an extension for any other simplified method that also has problems at high values of relaxation times. In this article, we also extend the idea of the macroscopic LBM algorithm to the magnetic field equations (3) and (4). Substituting $\tau_m = 1$ in (38), we obtain the algorithm

$$\mathbf{g}_i(\mathbf{x}+\mathbf{c}_i\delta t,\, t+\delta t) = \mathbf{g}_i^{eq}(\mathbf{x},t). \quad (127)$$

Recall the formula for the resistivity $\eta$ as a function of the relaxation time $\tau_m$,

$$\eta = c_s^2\left(\tau_m - \frac{\delta t}{2}\right). \quad (128)$$

Introducing $\tau_m = 1$ in (128), we obtain (129), i.e., $\eta = c\,\delta x/6$, and, considering $\delta x = 1$, we have $\eta = c/6$. The algorithm given by (127) solves (3) and (4) for a wide range of $\eta$ values but, similarly to the algorithm given by (121) and (122) for the velocity field, it is not practical for small values of resistivity; it is, however, very suitable for the values of resistivity associated with the quasi-static approximation (7). The idea in this article is to set $\delta x = 1$ in (129) (velocity and magnetic fields are solved on the same computational grid) and obtain $\delta t = 1/(6\eta)$. It implies that, if $\eta > 1/6$, then the algorithm (127) should be iterated a few times before every update of the single-step algorithm given by (54) and (55) for the momentum equation. The number of iterations $N_{mag}$ for the algorithm (127) can be defined as $N_{mag} = \lceil 1/\delta t \rceil$, where $\lceil\cdot\rceil$ denotes the smallest integer greater than or equal to its argument. In many applications, very high values of resistivity generate a prohibitive value of $N_{mag}$, but in these situations we can work with a kind of effective number of iterations, as we show in detail in Section V H. The result of these strategies is shown in Figures 6 and 7. For significantly high values of $\eta$, the algorithm given by (127) converges to the Poisson-type equation (9) as a natural asymptotic limit. A verification of the proposed single-step algorithm is also shown in these figures. In the references [5,7], the authors consider the introduction of extra parameters $\chi$ and $\gamma$ and use the traditional BGK algorithm (38) to solve a modified induction equation with a stationary solution of the desired form. The parameter $\chi$ can be set to achieve the desired magnetic Prandtl number $Pr_m$, and the parameter $\gamma$, usually much smaller than 1, helps to increase the convergence rate to steady-state solutions. The same strategy can also be applied to the single-step algorithm (54) as well.
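A minimal sketch of the resulting sub-cycling logic, under the conventions $\delta x = 1$ and $\tau_m = 1$ used above (so that $\eta = c/6$ and $\delta t = 1/(6\eta)$); the helper name is illustrative:

```python
import math

def magnetic_subiterations(eta):
    # N_mag = ceil(1/dt) with dt = 1/(6*eta): number of times the macroscopic
    # magnetic update (127) is iterated per step of the momentum solver.
    dt = 1.0 / (6.0 * eta)
    return max(1, math.ceil(1.0 / dt))   # equivalently ceil(6*eta)

# e.g. eta = 1000 (as in Section VI) would naively require 6000 sub-iterations
# per fluid step, which motivates the effective iteration count of Section V H.
```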
Originally in [5], this procedure was mostly considered for steady-state solutions, and its applicability to general flows can become prohibitive for numerical purposes. In this subsection, we show some strategies for the solution of these problems. We first consider a small modification in the equilibrium distributions given by (51), (52) and (53), as follows: for $\mathbf{c}_1 = [0, 0, 0]$, we introduce an extra coefficient $\alpha$ into the rest-velocity equilibrium and, for the other velocities $i = 2, \dots, N$, we consider correspondingly modified equilibria, together with a small modification of the algorithm (127), given by (136). By using the Chapman-Enskog multiscale expansion, it is possible to show that the algorithm (136) solves an advection-diffusion equation with the desired effective coefficients. In order to calculate more accurately the dependence of the errors on the resistivity for a fixed number of iterations, we monitor a residual for the magnetic field equation, in which the differential operators are calculated using isotropic finite-difference schemes [29]. The residual is normalized by the initial residual, i.e., the residual at the first iteration. The corresponding results are shown in Figure 10.

VI. EFFECTS OF NON-UNIFORM MAGNETIC FIELDS

Most LBM simulations of MHD flows only consider the influence of uniform transverse magnetic fields. In this section, we test the algorithms developed in the previous sections on problems involving an external non-uniform magnetic field, for example the field given by (140) and (141), where $(x, y, z)$ is a point in the fluid domain. These fields are obtained by using the Biot-Savart law [2], where $L$ is the width of the slab's rectangular cross-section, which we assume to have aspect ratio 2. We consider $L = R/6$. The magnetic field lines generated by (140) and (141) in the $yz$-plane are shown in Figure 11(a), and a schematic representation of the corresponding flow configuration is shown in Figure 11(b). In Figure 12, we show a simulation of an MHD flow in a circular pipe based on the schematic representation shown in Figure 11(b), with viscosity $\nu = 0.04$, resistivity $\eta = 1000$, Hartmann number $Ha = 20$ and pipe radius $r = 40$, in a computational grid with size $n_x \times n_y \times n_z = 5\times83\times83$. Periodic boundary conditions are considered in the streamwise direction and a constant body force $\partial p/\partial x = -2.16\times10^{-5}$ is imposed. In Figures 12(a) and 12(b), we can observe the contour lines for the configuration presented in Figure 11(b); the velocity and magnetic field profiles verify the expected symmetries associated with the system (7)-(10). The respective verification of the energy balance (58) is shown in Figure 12(c). The modification of the equilibrium distributions in order to implement the divergence of the Maxwell stress tensor [4], rather than the direct implementation of the Lorentz force, has not shown stable results for the cases involving non-uniform magnetic fields, indicating that, for the algorithms presented in this article, the forcing-term approach given by (69) is the appropriate choice.

(Figure 12: in (a) and (b), level curves for the velocity and magnetic fields, respectively; in (c), the verification of the energy balance equations given by (58).)

VII. SIMULATIONS WITH MAGNETIC PRANDTL NUMBER $Pr_m > 1$

In all of the previous discussions, we concentrated our analysis on regimes with $Pr_m \leq 1$. In this section, we analyse the results of the single-step simplified algorithms proposed in this article for the case $Pr_m > 1$. This regime usually requires more accuracy of the numerical methods in space and time.
The few results on this regime in the LBM literature [18,19] are performed up to $Pr_m = 2$ by using more robust numerical schemes, such as the central-moments-based LBM, in simulations with flat boundaries. In Figure 13(a), we can see a significant mismatch between the numerical and analytical solutions obtained by using (55) with (127) in a simulation with $Pr_m = 4$, despite the improvements introduced in the previous sections. In order to solve this problem, we consider a strategy based on the introduction of smaller time steps. Most simplified LBM methods are constructed considering $\delta t = \delta x$, which restricts the possibilities of changing $\delta t$ to some particular grid configurations. In order to avoid this limitation, we consider a set of rescaled variables, indicated by overlines, associated with an extra parameter $\gamma$, together with rescaled relaxation times $\bar{\tau}$ and $\bar{\tau}_m$, which are defined in such a way as to keep the viscosities and resistivities unchanged by the transformations (142) and (143). If we consider the inclusion of the GZS forcing term (61) in the simplified algorithm, then the introduction of (143) also leads to a correspondingly rescaled forcing term, with $\bar{\tau}$ given by (148). All of these modifications provide essentially the same results as those obtained using the preconditioning procedures described in [7,15-17]. The same equations can also be found by using the strategy of the adaptive time step (ATS) developed in [35], with the exception of the treatment of the non-equilibrium terms. If we consider $\gamma > 1$, we essentially decrease the effective time step by a factor of $1/\gamma$. Consequently, we obtain a significant improvement of the accuracy with minimal changes to the original single-step algorithm. Naturally, $\gamma$ cannot be changed arbitrarily: if $\gamma$ is too small, some transient phenomena with typical small time scales may be missed, and if $\gamma$ is too large, simulations may have an excessively slow convergence rate with some possible loss of accuracy, due to the fact that the scalings can bring the relaxation times too close to the value 0.5 when $\gamma \gg 1$. In Figures 13 and 14, we show some MHD pipe flow simulations with $\nu = 0.08$, $\eta = 0.02$, $Ha = 18$ and pipe radius $r = 22$. A constant body force with $\partial p/\partial x = -2.38\times10^{-4}$ is applied. The computational grid size considered is $n_x\times n_y\times n_z = 5\times50\times50$. In Figures 13(b) and 14(b), we show the velocity and magnetic field statistics associated with the scaling $\gamma = 4$; and in Figures 13(a) and 14(a), we present the statistics generated by using $\gamma = 1$. In Figure 14, it is possible to see that the solutions were essentially rescaled in time by a factor of $\gamma$. Not only that, we can also observe a significant improvement of the accuracy in space (verification of Gold's solutions) and time (correct verification of the energy balance).

VIII. CONCLUSIONS

In this article, we provide a set of extensions and improvements to a class of simplified LBM algorithms with the objective of simulating MHD flows with very small magnetic Reynolds numbers in pipes. We also introduce an immersed boundary method able to accurately include the effects of curved insulating walls in the MHD equations, whose accuracy is not significantly dependent on the values of the relaxation times. Improvements in the implementation of the forcing term allow an accurate and stable treatment of variable forcing terms, showing good results even in the presence of strongly non-uniform magnetic fields.
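A hedged sketch of the rescaling logic: the placement of $\gamma$ and the $1/2$ offset below follow the usual preconditioned-LBM convention $\nu = \gamma\, c_s^2(\tau - 1/2)\,\delta t$, and are therefore assumptions about the exact form of (142), (143) and (148), which are not reproduced here.

```python
def preconditioned_relaxation_times(nu, eta, gamma, cs2=1.0/3.0, dt=1.0):
    # gamma > 1 acts like an effective time step dt/gamma while nu and eta are
    # held fixed; both relaxation times then move toward 1/2, which is what
    # bounds how large gamma can usefully be taken.
    tau_bar = nu / (gamma * cs2 * dt) + 0.5
    tau_m_bar = eta / (gamma * cs2 * dt) + 0.5
    return tau_bar, tau_m_bar
```

With the values of Section VII ($\nu = 0.08$, $\eta = 0.02$), going from $\gamma = 1$ to $\gamma = 4$ would, under this convention, shrink $\tau - 1/2$ and $\tau_m - 1/2$ by a factor of 4, consistent with the solutions being rescaled in time by $\gamma$.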
With this set of improvements, in the present work we provide a completely local and explicit LBM framework for simulations of the quasi-static approximation in pipe flows, with good potential for simulations involving more complex geometries. By considering an adaptive time-step strategy, we were able to increase the precision of the single-step LBM algorithm in space and time with minimal changes to the general form of the algorithm, extending the applicability of the method to regimes up to $Pr_m = 4$, which had not yet been analyzed in the LBM literature. It is also important to mention that the results introduced in this article can be extended to some other simplified lattice Boltzmann models [14,21,22,34] as well. As future work, further verification of the proposed methods for turbulent flows and extensions to cases involving conducting curved walls are natural directions for this research, as well as systematic comparisons with similar solutions provided by other numerical methods. Also, the use of more robust LBM schemes, such as MRT (multiple-relaxation-time) and central-moments-based schemes, for the magnetic field equations can be an interesting option towards the same objectives of this article, with possible improvements in terms of accuracy [36].
Capital budgeting practices by non-financial companies listed on Kuwait Stock Exchange (KSE)

Abstract Purpose—The purpose of this study is to investigate various aspects of the capital budgeting techniques adopted by Kuwaiti non-financial companies listed on the Kuwait Stock Exchange (KSE). Design/methodology/approach—A questionnaire is used to collect data from Chief Executive Officers (CEOs), Chief Financial Officers (CFOs) and other managers of manufacturing, service and real estate companies listed on the KSE. Findings—The analysis unveiled that top management and the people who use the assets are the main sources of capital budgeting ideas. The analysis also unveiled that net present value and the profitability index are the most frequently used capital budgeting techniques, and that the choice of technique is determined by the nature of the project under assessment and by the academic and professional capabilities of corporate staff. The analysis further demonstrated that factors such as uncertainty about the outcome of the capital budgeting techniques and a lack of the data and information required to use them could prevent Kuwaiti non-financial companies from adopting capital budgeting techniques. Finally, the analysis disclosed that non-financial factors such as strategic planning, corporate image, employees' capabilities and environmental protection are taken into consideration when making capital budgeting decisions. Practical implications—Kuwaiti companies either possess the technology or have the required resources to install advanced technology to assist them in employing sophisticated capital budgeting techniques that take into account inflation and risk. This would ensure more accurate results and minimize uncertainty about the outcome of capital budgeting decisions. Originality/value—This study is based on primary data collected directly from non-financial companies listed on the KSE.

Introduction

Capital budgeting is a planning mechanism used by an organization to make evaluation decisions on how to allocate resources among investment projects. Capital budgeting techniques assist in identifying a project's feasibility. The importance of capital budgeting stems from the fact that it creates measurability and emphasizes accountability (Chartered Professional Accountants-Canada, [1]). Investing in the wrong project means committing corporate resources to a project without taking into consideration its risks and returns, thereby negatively affecting shareholders' wealth [2]. In addition, failure to appraise capital budgeting projects effectively will negatively affect corporate competitiveness, and this would jeopardize a firm's survival [3-5]. According to the International Federation of Accountants [6], to maintain a strong economy and ensure sustainable economic growth, it is important to adopt a systematic, analytical and thorough investment appraisal approach together with sound judgment. Hence, capital budgeting has been a subject of growing theoretical and empirical investigation in the finance literature. The central issue in this literature is to explore the most frequently used techniques and the reasons behind using some techniques more frequently than others (see e.g., Arkovics, [6]; Block, [7]; Gitman & Forrester, [8]; Ryan & Ryan, [9]).
Empirical research has provided inconclusive evidence regarding capital budgeting practices among users: while several studies showed the payback period (PP) to be the most popular technique employed in evaluating projects in developing countries, other studies demonstrated that discounted cash flow methods (DCM) such as the net present value (NPV) and the internal rate of return (IRR) are the most frequently practiced capital budgeting techniques. These findings, however, are questionable in countries such as those of the Gulf Co-operation Council (GCC), where governments possess control over major economic activities and companies, as well as exempting investors from paying tax. Given that companies and investors are exempted from paying income tax, tax on dividends or capital gains, this would impact the capital budgeting process adopted by Kuwaiti companies, where the government provides all necessary infrastructure to businesses in Kuwait and the country has enough liquidity to facilitate making capital budgeting decisions. The purpose of this study is to provide empirical evidence about the current capital budgeting practices being used by non-financial firms listed on the Kuwait Stock Exchange (KSE). The focus will be on non-financial institutions, since they constitute a homogeneous sample and this avoids difficulties in controlling for unsystematic effects. Moreover, financial institutions are mainly concerned with solvency and liquidity, and their liabilities are mostly short-term in nature and payable on demand, with few fixed costs and lower operating leverage than non-financial institutions. The global financial crisis, oil price volatility and the significant progress in information technology have posed new challenges for corporate financial management in general and for investment decision-makers in particular. Consequently, companies are expected to adjust their investment appraisal approaches. Hence, there is a need to examine capital budgeting practices. This study is a departure from previous studies, as it covers different aspects of the investment appraisal techniques employed by companies operating in different sectors of the economy. The focus of most previous research was mainly on the factors influencing the choice of a certain investment appraisal technique or on identifying the frequently employed techniques in one or two economic sectors. In addition, most of the previous studies targeted CFOs (Chief Financial Officers), while this study targeted the person responsible for making investment decisions in the company. Hence, the current study is expected to make an important contribution benefiting both practitioners and academicians. For the former, it would help in making appropriate investment decisions by using the right capital budgeting appraisal technique. For the latter, it provides insights about the use of real options and enables them to identify problems faced by practitioners. This would assist them in conducting further research and in updating the curricula adopted by business schools operating in Kuwait and in the neighbouring countries that share Kuwait's level of economic development, nature of businesses and other economic activities. In addition, the current study is undertaken in a country with unique features, where the government exercises control over economic activities, provides an advanced infrastructure and offers financial support to national businesses.
The country has a surplus of financial resources, and businesses do not face problems in securing external funding to start new projects or to finance future growth. These unique features of Kuwait would affect different aspects of the capital budgeting techniques adopted by the national non-financial companies. Hence, the findings of this study are expected to add a new dimension to the finance literature and to contribute to the limited body of empirical studies about capital budgeting appraisal techniques in the GCC region. The remainder of the study is organized as follows. A brief review of the related literature is offered in the following section. Data collection and methodology are discussed in section three. The findings are explained in section four, and the conclusion is presented in the final section.

Studies Investigating Investment Appraisal Techniques Most Frequently Used in Practice

Velez and Nieto [20] surveyed the capital budgeting practices adopted by Colombian firms. They found investors using discounted methods for investment decisions, as these suit the Colombian economy, which experiences a considerably higher rate of inflation, and the country's monetary policy, which was designed to restrict borrowing. Similarly, Jog and Srivastava [11] used a survey to identify the capital budgeting practices of large Canadian companies. They found the discounted cash flow (DCF) methods to be the techniques most frequently used by the surveyed companies, and noticed a high use of subjectivity and judgment in the estimation of inputs into the capital budgeting process. Pike [26] showed that British companies employ more than one method of investment appraisal. He added that the majority of investors used the PP method and that there is growing interest in employing IRR and NPV. Arnold and Hatzopoulos [27] also examined the capital budgeting techniques employed by UK companies and revealed that the majority of these companies have increasingly adopted advanced methods to enhance their decision making in the evaluation of new projects. Babu and Sharma [40] surveyed firms in and around Delhi and Chandigarh in India. They found that more than 90 per cent of the firms adopted capital budgeting methods and that the majority of them used DCF methods. They also found that the popular investment appraisal methods are the IRR and the PP, used either individually or jointly. An additional study undertaken in India by Arora [39] observed that most of the surveyed firms prefer the discounted PP method and consider it the most important capital budgeting technique. Arora provided evidence that major firms in India are utilizing many of the tools of analysis presented in financial theory for analysing capital budgeting. In a more recent study, Batra and Verma [41] investigated capital budgeting practices in a sample of 77 Indian companies listed on the Bombay Stock Exchange. The researchers found that the surveyed companies adopt the capital budgeting practices described by academic theories. They also found DCF methods (NPV and IRR), together with risk-adjusted sensitivity analysis, to be the investment appraisal techniques most frequently used by the surveyed companies. In a similar line of research, Hogaboam and Shook [18] observed that firms prefer the use of DCF techniques, particularly the NPV method, and provided evidence that some big firms employ more sophisticated evaluation methods.
Apap and Masson [19] surveyed publicly traded utility companies in the USA and found that PP, NPV and IRR are the commonly used investment appraisal techniques. Truong, Partington, and Peat [37] examined the capital budgeting practices of Australian listed firms and found NPV, IRR, and PP to be the most popular evaluation techniques. Nishat and Haq [10] used a questionnaire survey to identify the present application of quantitative capital budgeting methods by Pakistani firms. They found that the most popular capital budgeting techniques in Pakistan are NPV and IRR. Khamees et al. [1] used a questionnaire and an interview to examine capital budgeting practices in Jordanian Industrial Corporations (JIC). They found that respondents do not rely on a single technique: JIC give almost equal importance to discounted and non-discounted cash flow methods in evaluating capital investment projects. They also observed that the most frequently used technique is the profitability index (PI), followed by the PP. Another study, by Al-Azawai [47], which explored the use of capital budgeting techniques by Jordanian listed services firms, found that they use DCF techniques, as well as non-DCF techniques, when assessing capital investment projects. He also found that PP is the most frequently used capital budgeting technique, followed by NPV, PI, the accounting rate of return (ARR), and IRR. However, Al-Azawai provided evidence that these practices are not widely used by the capital budgeting decision makers of the listed Jordanian services companies, since they rely heavily on subjective judgment. Abdulsamad and Shaharuddin [50] examined the use of capital budgeting techniques in publicly listed firms in Malaysia. They found that PP is the most popular technique among those who do not use DCF techniques. Shinoda [45] surveyed people in charge of capital budgeting at firms listed on the Tokyo Stock Exchange, with a focus on capital budgeting practices. He found that Japanese firms manage their decision-making through a combination of the PP and NPV methods. Shinoda observed that Japanese firms invest in projects in which their investment can be recovered in a short period; hence, they tend to adopt the PP. El-Daour and Abu Shaaban [51] examined the capital budgeting techniques used by Palestinian public corporations in the Gaza Strip. They found that the Palestinian publicly owned corporations in the Gaza Strip use capital budgeting techniques when selecting investment projects. They also found that the PI is the most frequently used technique, while the NPV was found to be the least used. They recommended that managers increase the use of the NPV for evaluating proposed investment projects. Andrés, Fuente, and Martin [54] explored the use of capital budgeting practices in a sample of 140 non-financial Spanish companies. They detected that PP was the most frequently used method of capital budgeting. Souza and Lunkes [52] studied capital budgeting practices in a sample of 51 large Brazilian companies traded on the stock exchange. They reported that companies frequently use the PP, NPV and IRR to appraise investment projects. The surveyed companies indicated that they further conduct sensitivity analyses to assess investment risk. The researchers concluded that companies tend to adopt more sophisticated techniques at various stages of capital budgeting.
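For readers less familiar with the acronyms recurring in this literature, the following self-contained sketch makes the four most frequently cited criteria concrete; the project figures are hypothetical and chosen purely for illustration.

```python
def npv(rate, cfs):
    """Net present value; cfs[0] is the (negative) initial outlay."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

def profitability_index(rate, cfs):
    # PI = present value of inflows divided by the initial outlay.
    return npv(rate, [0.0] + list(cfs[1:])) / -cfs[0]

def payback_period(cfs):
    # Undiscounted PP: fractional years until the outlay is recovered.
    remaining = -cfs[0]
    for t, cf in enumerate(cfs[1:], start=1):
        if cf >= remaining:
            return t - 1 + remaining / cf
        remaining -= cf
    return None  # outlay never recovered

def irr(cfs, lo=0.0, hi=1.0, tol=1e-7):
    # IRR by bisection, assuming NPV changes sign once on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid, cfs) > 0 else (lo, mid)
    return lo

project = [-10_000, 4_000, 4_000, 4_000, 4_000]   # hypothetical KWD figures
print(npv(0.10, project),                  # ~ 2,679 > 0: accept under NPV
      profitability_index(0.10, project),  # ~ 1.27 > 1: accept under PI
      payback_period(project),             # 2.5 years
      irr(project))                        # ~ 21.9%: above a 10% hurdle
```

For this hypothetical project the four criteria agree; the surveys reviewed above differ mainly in which of these measures firms compute routinely and in how they trade off the simplicity of PP against the theoretical soundness of the discounted methods.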
The Relationship between the Use of Investment Appraisal Techniques and Corporate Attributes

Drury and Tayles [24] examined the impact of company size on the use of financial appraisal techniques and on the treatment of inflation by UK companies. They noticed that 63% of the large firms always employ IRR, 50% NPV and 30% always adopt the PP method. They also noticed that 86% of the surveyed firms often or always employ the unadjusted payback approach together with a DCF method, and further observed that non-discounted methods continue to be employed by both small and large firms. Graham and Harvey [17] surveyed Chief Financial Officers (CFOs) about the cost of capital, capital budgeting, and capital structure. They found that large firms rely heavily on the NPV technique and the capital asset pricing model (CAPM), while small firms use the PP criterion. Graham and Harvey provided evidence to support the pecking-order and trade-off capital structure hypotheses, but little evidence that executives are concerned about asset substitution, asymmetric information, transaction costs, free cash flows, or personal taxes. Zubairi [49] examined the capital budgeting decision-making practices of Pakistani firms. He found that large firms give preference to IRR, while smaller firms rely more on NPV. He observed that smaller firms are keener on estimating the PP than larger companies. Zubairi concluded that firms relying more on debt financing, or with high growth rates, give more preference to the NPV technique, while low-leverage and low-growth firms rely more on IRR. Similarly, Nishat and Haq [10] noticed that small firms in Pakistan used PP as their main criterion in the evaluation of capital budgeting proposals. They observed that a single-factor CAPM model is used by large firms for ascertaining the cost of capital. Verma et al. [38] surveyed CFOs and Chief Executive Officers (CEOs) of manufacturing companies in India to identify the capital budgeting techniques preferred by these companies. They found a systematic relationship between company-related factors, such as the age of a company and the CEO's education and qualifications, and the capital budgeting method it adopts. Ramesh and Nimalathasan [46] examined the use of capital budgeting techniques in a sample selected from manufacturing, pharmaceutical and chemical, and textile firms listed on the Colombo Stock Exchange. They found the NPV method to be the dominant capital budgeting technique according to the perception of executives from all sectors. They also found that most executives of manufacturing, pharmaceutical and chemical companies prefer the NPV and IRR methods, and observed that executives in the textile sector prefer the NPV method for evaluating capital budgeting in Sri Lanka. Daunfeldt and Hartwig [35] examined the choice of capital budgeting methods by companies listed on the Stockholm Stock Exchange. They found that the choice of capital budgeting methods is influenced by leverage, growth opportunities, dividend pay-out ratios, the choice of targeted debt ratio, the degree of management ownership, foreign sales, industry, and individual characteristics of the CEO. Andrés et al. [54] explored the use of capital budgeting practices in a sample of 140 non-financial Spanish companies. They found that corporate size and industry are the most important determinants of the choice of capital budgeting techniques.
Gupta [42] explored the relationship between capital budgeting practices and firm size in a sample of 75 Indian companies, and reported a positive association between the use of DCF techniques and corporate size.

Comparison of the use of Investment Appraisal Techniques by Public and Private Firms and across Countries

Eljelly and Abu Idris [33] examined capital budgeting techniques in both the public and private sectors in Sudan. They provided evidence that both sectors use capital budgeting techniques, noticing that the PP is the most frequently used method, followed by the IRR among private sector firms and the NPV among public sector firms. Hermes et al. [44] surveyed Dutch and Chinese firms to compare their use of capital budgeting techniques. They found that Dutch CFOs on average use more sophisticated capital budgeting techniques than Chinese CFOs do. They also found that the use of the IRR method does not seem to differ significantly between Dutch and Chinese firms, nor does the use of the CAPM as a method of estimating the cost of capital. They concluded that the difference between Dutch and Chinese firms is smaller than might have been expected given the difference in the level of economic development between the two countries. Andor et al. [48] surveyed firms' executives in 10 Central and Eastern European (CEE) countries, namely Bulgaria, Croatia, the Czech Republic, Hungary, Latvia, Lithuania, Poland, Romania, the Slovak Republic, and Slovenia, to explore their capital budgeting practices. They observed significant variations between the practices of large and small/medium firms, and between local firms and firms dominated by a multinational culture. Andor et al. concluded that capital budgeting practices in CEE countries appear to be influenced mostly by firm size and multinational culture, and by inside ownership to a lesser extent. Mohan and Narwal [43] provided a review of all the research related to capital budgeting practices that they managed to retrieve from their electronic database, and noticed an increase in the use of DCF techniques together with the PP in both developed and developing countries. However, the authors argued that there is still a need for more research on project identification, cash flow estimation and the post-audit of selected projects.

Previous Studies Undertaken in the GCC Countries

As far as the GCC region is concerned, few studies have been conducted to examine capital budgeting practices (see e.g. Qatar: [55,56]; Kuwait: [57-59]; United Arab Emirates (UAE): [60]). A brief review of these studies is offered in the following section. Alhamoud and Ibrahim [55] used a questionnaire survey to identify the capital budgeting techniques used by Qatari firms. They found insignificant differences among sectors in terms of the utilization of one method over another, and observed that the PP method is the most commonly used, followed by the IRR, PI, NPV and the ARR. Mustafa and Hindi [56] used a questionnaire survey to identify the capital budgeting practices employed by the largest firms in Qatar. They found that Qatari companies in general tend to adopt DCF methods, with the NPV, PI and the IRR being the most widely used. They also found that the NPV and IRR are the most frequently used methods, and that the PI is the most common method used to rank competing opportunities. Mustafa and Hindi also observed that most of the companies estimate the cost of capital and adopt the CAPM with the inclusion of some extra risk factors. In Kuwait, Al-Mutairi et al.
[57] used a questionnaire to survey 80 CFOs of Kuwaiti listed firms about their capital budgeting practices. They found that capital budgeting practices vary depending on firm and management characteristics, and noticed a high tendency to use the IRR as a capital budgeting technique for investment decision making. Another study, by Al-Mutairi and Hasan [58], identified current corporate finance practices of Kuwaiti listed and non-listed firms. They found that the CAPM is used to estimate the cost of capital, and provided evidence that the weighted average cost of capital is the most popular rate used, due to its simplicity. Also, El-Sady et al. [59] used a questionnaire to survey listed and unlisted Kuwaiti companies. They found the NPV and PP to be the most popular capital budgeting techniques used to evaluate capital investment among Kuwaiti companies, and concluded that there is no significant difference between listed and unlisted Kuwaiti corporations in their capital budgeting practices. Ahmed [60] surveyed a sample of companies listed on the Dubai Financial Market (DFM) to explore their capital budgeting methods. He noticed that a sizeable number of UAE companies use capital budgeting techniques in their capital investment decisions, and found that the PP, NPV, and IRR are the most frequently used techniques by most UAE companies. Ahmed observed that other financial variables, such as firm size, revenues, profitability, and leverage level, are likely to affect the selection of the capital budgeting technique. It is evident from the above brief literature review that a limited number of empirical studies have been undertaken to explain capital budgeting practices in the GCC region. This emphasizes the need for additional empirical testing; hence, it was important to conduct the current study.

Research Methodology

Data Collection

As mentioned earlier, the objective of this study is to offer empirical evidence on different aspects of capital budgeting practices in a sample of services, manufacturing and real estate companies listed on the KSE. To achieve this objective, a questionnaire was developed in line with previous studies undertaken in similar areas of research (see e.g. [10,33,35,40,45,46,51,55,57,59,60]). Utilizing a questionnaire similar to those used in previous research is important since it facilitates comparison. However, the questionnaire developed in the current study differs from those employed in previous studies in that it covers several aspects of capital budgeting, whereas previous studies mainly focused on the use of capital budgeting techniques and attempted to establish whether corporate sector and characteristics affect the choice of their use. The questionnaire consisted of two parts. The first part requested background information about the respondents; the second part asked respondents to express their level of agreement with different aspects of capital budgeting on a 5-point Likert scale ranging between strongly disagree and strongly agree. The main purpose of the second part of the questionnaire was to seek answers to the following research questions.

Sources of Capital Budgeting Ideas

Capital budgeting is viewed as an important financial management decision. It involves buying expensive assets to be used for a long time, and this affects the future success or otherwise of the firm. Taking the right investment decisions in the process of capital budgeting assists management and the company in maximizing shareholders' wealth.
Thus, identifying the right projects is an extremely important part of the capital budgeting process. It was, therefore, important to ask the following research question. Research question 1: What are the main sources of capital budgeting ideas?

Most Frequently used Capital Budgeting Techniques

The focus of previous research was on identifying the capital budgeting techniques most frequently used in practice. While in some studies traditional capital budgeting techniques appeared to be the most frequently employed, in others the DCF techniques were more in use; other studies showed a combination of traditional, DCF and advanced techniques. It was, therefore, important to ask the following research question. Research question 2: Which capital budgeting techniques are most frequently used by Kuwaiti non-financial companies?

Factors that Affect the Choice of the Capital Budgeting Techniques

Previous studies attempted to establish the relationship between corporate attributes (size, age, management education, the level of gearing, etc.) and the use of specific capital budgeting techniques. In this study, the focus is on the simplicity of the capital budgeting techniques and staff familiarity with them, which is mainly affected by staff education and experience. This sheds new light on what determines the use of specific capital budgeting techniques and adds a new dimension to the existing body of the literature. It was, therefore, important to ask the following research question. Research question 3: What are the main factors that affect the choice of the capital budgeting techniques?

Obstacles towards using Capital Budgeting Techniques

Previous research has largely ignored the obstacles to employing specific capital budgeting techniques. Corporate management might be deterred from appraising capital projects by the difficulty of collecting the data required to facilitate the analysis and by the cost associated with the appraisal. It is possible that some companies opt not to go through the capital budgeting process since it is difficult, time consuming and requires highly trained staff. Other corporate managements might choose to avoid appraising investment projects due to uncertainty about the outcome of the appraisal. Identifying the main obstacles to the use of various capital budgeting techniques assists in finding the means to minimize them. It was, therefore, important to ask the following research question. Research question 4: What are the obstacles towards using capital budgeting techniques?

Non-financial Factors that Affect Capital Budgeting Decisions

Although maximizing profit and shareholder wealth are the main objectives of the firm, corporate management may consider non-financial factors when making capital budgeting decisions. Kuwaiti companies may regard strategic planning, social responsibility, environmental protection and the maintenance of employee morale as factors that affect capital budgeting decisions and corporate image. It was, therefore, important to ask the following research question. Research question 5: What are the non-financial factors that affect capital budgeting decisions?

Pilot Study and the Distribution of the Questionnaire

To increase validity and to ensure its simplicity, understandability and the suitability of the respondents, the questionnaire was piloted with a group of investors in the KSE, who provided valuable suggestions to enhance participation.
The piloted investors' views helped in shortening the questionnaire and improved the quality of the translated Arabic version; undoubtedly, this step ensured a relatively high response rate. After piloting the questionnaire, and during the period between June and December 2016, the researchers distributed the questionnaire to all services, manufacturing and real estate companies listed on the KSE. The covering letter explained the aim of collecting the data included in the questionnaire. The respondents were assured that the collected data would be used solely to conduct scientific research and that their anonymity was guaranteed. The questionnaire did not ask for any information about the names of the respondents or their respective companies; the respondents were only asked to identify their job title. A summary of their response is presented in Table 1. The questionnaires were then entered into an SPSS file for analysis. Cronbach's alpha was used to measure the internal consistency of the collected data. Descriptive statistics were employed to shed some light on the respondents and their responses to various aspects of capital budgeting techniques. To identify possible differences in the respondents' answers to the questions included in the questionnaire due to their characteristics, the Kruskal-Wallis test was performed.

Findings

To measure the internal consistency (reliability) of the collected data, Cronbach's alpha (α) was computed and reached 0.854; in general, a Cronbach's alpha (α) ≥ 0.70 is considered acceptable.

Respondents' Background

Analysis of the collected data disclosed that the average age of the companies where the respondents work is 30 years, with company ages ranging between 7 and 57 years. Table 2 summarizes the main characteristics of the respondents who took part in the questionnaire. It can be seen from the table that 51% of the respondents are non-Kuwaitis and the vast majority (82%) are males. This reflects the features of Kuwaiti society, where males dominate major business activities in the country and occupy high managerial positions. In addition, the table reveals that more than 80% of the respondents are either CEOs or CFOs and fairly represent the services, manufacturing and real estate sectors. Most of them (73%) have academic qualifications in business-related studies (accounting, business and finance) and more than 85% hold university academic qualifications. The table further shows that almost 44% of the respondents have more than 10 years of work experience. More than 55% of the respondents indicated that their companies embark annually on more than 10 capital investment projects, and more than 65% claimed that their companies' annual average capital budget is more than 6 million Kuwaiti Dinars.

Kuwaiti Companies' Sources of Capital Budgeting Ideas

The respondents were asked to identify the sources of capital budgeting project ideas in their companies. The result of their answers is summarized in Table 3. Although the table shows that all the ideas included in the questionnaire are considered to be good sources of capital budgeting, as reflected by the medians, the most important source of capital project ideas was top management, followed by the people who use the assets. Other sources that appeared in the questionnaire seem to be less important sources of capital budgeting ideas.
This result is not surprising, since most capital budgeting decisions by Kuwaiti companies are taken by top management. Even if the people who use the assets initiate some capital budgeting ideas, the final decision about whether to invest is still taken by top management.

Capital Budgeting Techniques Employed by Kuwaiti Companies

The respondents were asked to specify the technique(s) frequently employed when making capital budgeting decisions. The result of their answers is presented in Table 4. The table demonstrates that all the capital budgeting techniques listed in the questionnaire are used by the Kuwaiti companies, as reflected by the resulting medians. This result is in line with Drury and Tayles [24] and Pike [26], who found that British companies employ more than one method of investment appraisal. Arora [39] also revealed that most Indian firms surveyed in his study utilize many of the capital budgeting methods. Furthermore, Khamees et al. [1] and Al-Azawai [47] noticed that industrial and services companies in Jordan rely on more than one investment appraisal technique and contended that they assigned almost equal importance to traditional and DCF techniques. The means, however, illustrate that the NPV is the most frequently used capital budgeting technique, followed by the PI. This result is predictable since, after calculating the NPV, it is easy to obtain the PI for the appraised project, and both lead to the same accept/reject decision; consequently, the respective means of the two techniques are almost identical. What attracts attention in the table is that the DCF techniques are the most frequently employed by Kuwaiti companies listed on the KSE. The result is consistent with the results reported by Alhamoud and Ibrahim [55] and Al-Azawai [47]. These studies were undertaken in three Arab countries (Qatar, Jordan and Palestine) and reported both the NPV and PI as the most frequently used capital budgeting techniques. Another study, undertaken by El-Daour and Abu Shaaban [51], showed that the PI is the most frequently used investment appraisal technique in Palestine, whereas the NPV appeared to be the least used technique. The result is, however, partially consistent with Babu and Sharma [40] (utility companies in South Africa), Apap and Masson [19] and Truong et al. [37], who reported PP, NPV and IRR as the most widely used investment appraisal techniques. The result is also partially consistent with Eljelly and Abu Idris [33], who observed that the NPV is frequently used by public companies in Sudan. Similarly, Shinoda [45] showed that Japanese companies frequently use the NPV and PP methods. However, Arora [39] found the discounted PP to be the most popular investment appraisal technique employed by Indian companies, and Eljelly and Abu Idris [33] found the PP, followed by the IRR, to be used more frequently by private companies in Sudan. Ramesh and Nimalathasan [46] surveyed the use of capital budgeting techniques in the manufacturing, pharmaceutical and chemical, and textile companies listed on the Colombo Stock Exchange and detected that the NPV is the most frequently used technique by companies from all sectors. Yet the result is inconsistent with Arnold and Hatzopoulos [27], Hogaboam and Shook [18] and Hermes et al. [44], who conducted their studies in developed countries and found that the companies operating there use more sophisticated investment appraisal techniques than the traditional and DCF ones.
What attracts attention is that the result obtained in this study is inconsistent with Al-Mutairi et al. [57] and El-Sady et al. [59], who conducted their studies in Kuwait: the former reported the IRR as the most frequently used investment appraisal technique, whereas the latter pointed to the NPV and PP. The inconsistent results might be due to differences in timing, in the surveyed companies and in the respondents. Companies are expected to change their choice of investment appraisal techniques as time goes by; this study targeted services, manufacturing and real estate companies, while the other two did not specify the sector(s) of the targeted companies. Finally, while those who took part in this study were CEOs, CFOs and others, participation in the other two studies was restricted to CFOs.

Factors that Affect the Choice of the Capital Budgeting Techniques

Previous studies shed light on the determinants of this choice. Zubairi [49] indicated that Pakistani companies relying on debt financing or realizing high growth rates tend to employ the NPV technique, whereas low-leveraged and low-growth-rate companies use the IRR technique more frequently. Verma et al. [38] documented that the choice of a specific capital budgeting technique is influenced by corporate age and by CFOs'/CEOs' education and qualifications. Maroyi and Poll [14] indicated that the main reason for using a specific investment appraisal technique was its superiority. Daunfeldt and Hartwig [35] found that the choice of capital budgeting techniques is influenced by leverage, growth opportunities, dividend pay-out ratios, the choice of targeted debt ratio, the degree of management ownership, foreign sales, industry, and individual characteristics of the CEO. Ahmed [60] demonstrated that the selection of investment appraisal techniques by companies listed on the DFM is influenced by corporate size, revenue, profitability and leverage. Andor et al. [48], who examined a sample from 10 Central and Eastern European countries, noticed that corporate size and multinational culture affect the choice of investment appraisal techniques, while inside corporate ownership affects the choice to a lesser degree.

Obstacles towards the use of Capital Budgeting Techniques

The respondents were asked to identify possible obstacles that prevent them from using capital budgeting techniques. A summary of their answers is presented in Table 6. According to the table, uncertainty about the outcome of the appraisal and the lack of the data and information required to apply the techniques emerged as the main obstacles.

Non-Financial Factors Affecting the use of Capital Budgeting Techniques

Although there has been great emphasis on the financial aspects in the capital budgeting literature, researchers such as Adler [53], Meredith and Mantel [62], Mohamed and McCowan [63], and Love, Holt, Shen, Li, and Irani [64] have highlighted the need to consider both financial and non-financial aspects when making capital budgeting decisions. These researchers believe that capital budgeting decisions are sophisticated and go beyond financial aspects. Hence, the respondents were asked to ascertain whether non-financial factors are likely to influence the use of capital budgeting techniques. As shown in Table 7, the respondents state that non-financial factors such as strategic planning, corporate image, employees' capabilities and environmental protection are considered when making capital budgeting decisions. This result is partially in line with the outcome of research undertaken by Batra and Verma [41], who showed that most of the companies covered in their study pay attention to non-financial factors when they make capital budgeting decisions; the decisions take into account corporate objectives and strategy together with customer market/demand analysis.
They also take into account the availability of raw materials, power and manpower, suitable project location, technology, employee and public safety, the necessity of maintaining existing product lines, meeting competition, government legislation and environmental factors. The result is also partially consistent with Petty, Scott, and Bird [65].

Differences among the Respondents about the use of Different Capital Budgeting Techniques

To establish whether differences in the respondents' characteristics have any effect on the use of capital budgeting techniques, the Kruskal-Wallis test was performed and the result is reported in Table 8. Respondents' characteristics such as age, latest academic qualification, academic major and the industry where they work were all tested. It is evident from the table that the respondents were consistent about the use of capital budgeting techniques except in a few cases. The respondents showed significant differences in using the PI and IRR. These two techniques are more difficult to compute than the others, though they can be estimated using software packages; new, young graduates are more technology-literate and are expected to utilize their skills to estimate the IRR. Respondents' latest academic qualifications were also associated with significant differences in the use of the IRR. Once again, the IRR is relatively difficult to calculate and requires some computing skills; postgraduates are expected to have the advanced technological knowledge, skills and expertise that enable them to estimate the IRR more readily than undergraduates. Finally, significant differences appeared across industries in the use of the PP and IRR. Given that the respondents represent three different industries, and the companies they represent vary in their nature and in the size of the targeted capital projects, it is normal to witness significant differences in the capital budgeting techniques they adopt.

Conclusion

In this study, an attempt is made to explore different aspects of the capital budgeting practices of non-financial companies listed on the KSE. The study looked at the sources of capital budgeting ideas, the capital budgeting techniques most frequently used, what determines the choice of the capital budgeting technique, the obstacles towards using capital budgeting techniques, and the non-financial aspects that might affect capital budgeting decisions. To achieve the objectives of the study, during the period between June and December 2016, a questionnaire was distributed to all services, manufacturing and real estate companies listed on the KSE; 60% of these companies completed the questionnaire. The result of the analysis revealed that top management and the people who use the assets are the main sources of capital budgeting ideas. The analysis also revealed that the NPV and PI are the most frequently used capital budgeting techniques, and that the choice of technique is determined by the nature of the project under assessment and the academic and professional capabilities of corporate staff. The analysis further demonstrated that factors such as uncertainty about the outcome of the capital budgeting techniques and the lack of the data and information required to use them could prevent Kuwaiti non-financial companies from adopting capital budgeting techniques. Furthermore, the analysis disclosed that non-financial factors such as strategic planning, corporate image, employees' capabilities and environmental protection are taken into consideration when making capital budgeting decisions.
Finally, Kuwaiti companies either possess the technology or have the resources required to install advanced technology to assist them in employing sophisticated capital budgeting techniques that take inflation and risk into account. The companies should make use of the detailed information published online to collect the necessary data about various investment opportunities. This would ensure more accurate results, minimize uncertainty about the outcome of capital budgeting decisions, and encourage companies to use capital budgeting techniques more frequently. Although an additional study on capital budgeting sheds more light on the gap between theory and practice and assists both academics and practitioners, the sample surveyed in the current study is not fully representative, since it covers only 3 of the 14 industries that form the KSE. In addition, the focus of the current study was on companies listed on the KSE; to give a complete picture of capital budgeting practices in Kuwait, unlisted companies need to be surveyed in future research. Thus, generalizations of the obtained results should be made with caution. Moreover, to increase the response rate, the questionnaire did not include questions about the company's name and its financial characteristics. The availability of such information would form a basis for studying the effect of various corporate characteristics on capital budgeting practices.
2018-12-11T01:36:24.109Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "d125ee9a3d603b5783ef190d40f7ae65f6b8d284", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23322039.2018.1468232?needAccess=true", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a8ec665826d78b64edecaff0f429c682e2971458", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
12747256
pes2o/s2orc
v3-fos-license
Constrained Nonlinear Model Predictive Control of an MMA Polymerization Process via Evolutionary Optimization

In this work, a nonlinear model predictive controller is developed for a batch polymerization process. The physical model of the process is parameterized along a desired trajectory, resulting in a trajectory-linearized piecewise model (a multiple linear model bank), and the parameters are identified for an experimental polymerization reactor. Then, a multiple model adaptive predictive controller is designed for thermal trajectory tracking of the MMA polymerization. The input control signal to the process is constrained by the maximum thermal power provided by the heaters. The constrained optimization in the model predictive controller is solved via genetic algorithms to minimize a DMC cost function in each sampling interval.

INTRODUCTION

Model predictive control (MPC) is a model-based advanced process control (APC) technique that has proved very successful in controlling highly complex dynamic systems. It naturally supports design for MIMO and time-delayed systems as well as state/input/output constraints. MPC is generally based on online optimization, but in the case of unconstrained linear plants, closed-form solutions can be derived analytically. However, when there are constraints on the control inputs (i.e. actuators) and/or process states, which is often the case, an online (i.e. real-time) constrained optimization problem has to be solved in each sampling interval, even if the plant model is linear and time invariant. This online optimization usually requires high computational power; however, since chemical processes typically have slow dynamics, such controllers have been designed and implemented on various chemical plants with great success. Moreover, due to recent advancements in computational hardware and software tools, the usage of MPC is rapidly expanding to other control domains, including electrical machines, renewable energy, aerospace and automotive control systems. In the past two decades, the effective control of polymerization processes has been studied by many authors [1-7]. Polymerization kinetics are usually complex due to the nonlinearity of the process; therefore, the control of the polymerization reactor remains a challenging task. Due to its great flexibility, a batch reactor is suitable for producing small amounts of special polymers and copolymers. The batch reactor is always dynamic by its nature, so it is essential to have a suitable dynamic model of the process. Rafizadeh [1] presented a review of the proposed models and suggested an online estimation of some parameters; his model consists of the oil bath, electrical heaters, cooling water coil, and reactor. Peterson et al. [2] presented a nonlinear predictive strategy for semi-batch polymerization of MMA. Soroush and Kravaris [3] applied a Global Linearizing Control (GLC) method to control the reactor temperature; the performance of the GLC in tracking an optimal temperature trajectory was found to be suitable. DeSouza et al. [4] studied an expert neural network as an internal model in the control of solution polymerization of vinyl acetate; in their study, they compared their neural network control with a classic PID controller. Clarke-Pringle and MacGregor [5] studied the temperature control of a semi-batch industrial reactor, suggesting a coupled nonlinear strategy and an extended Kalman filter method. Mutha et al.
[6] suggested a nonlinear model-based control strategy, which includes a new estimator as well as a Kalman filter; they conducted experiments in a small reactor for the solution polymerization of MMA. Rho et al. [7] assumed a first-order model plus dead time for their control studies and estimated the parameters of this model online via an ARMAX model. Nonlinear predictive control of the batch reactor was considered in [8] and [9] via PCA and Wiener modeling approaches, respectively. When MPC is formulated as a state feedback controller, full state information is required, which must be provided using state estimators in nonlinear H2 (e.g. EKF) or nonlinear H∞ paradigms [10-12], [27,28]. Rafizadeh [13] designed a sequential linearization adaptive controller for the solution polymerization of methyl methacrylate in a batch reactor. This paper presents a constrained model predictive control of an MMA reactor, based on genetic algorithm optimization. A previously developed mechanistic model of the process was used; the model is sequentially piecewise linearized along a selected temperature trajectory. The piecewise linear model is used both for the plant output calculation through the prediction horizon and for the closed-loop simulation of the controller, using a time-triggered switching mechanism. We use an output feedback MPC; therefore, no state estimator is required, which is advantageous. The results of tracking the trajectory and eliminating noise and disturbances show a promising performance of the controller.

POLYMERIZATION MECHANISM

Methyl methacrylate is normally polymerized by free radical, chain addition polymerization. Free radical polymerization consists of three main reactions: initiation, propagation and termination. Free radicals are formed by the decomposition of initiators. Once formed, these radicals propagate by reacting with the surrounding monomers to produce long polymer chains, the active site being shifted to the end of the chain when a new monomer is added. During propagation, millions of monomers are added to the P1 radicals. During termination, the concentration of radicals decreases due to reactions among free radicals; termination occurs by combination or disproportionation reactions. Through chain transfer reactions to monomer, initiator, solvent, or even polymer, the active free radicals are converted to dead polymer. Table 1 gives the basic free radical polymerization mechanism [14]. The free radical polymerization rate decreases due to the reduction of monomer and initiator concentrations. However, beyond a certain conversion the increase in viscosity causes a sudden increase in the polymerization rate; this is called the Trommsdorff, gel, or auto-acceleration effect. For bulk polymerization of methyl methacrylate beyond 20% conversion, the reaction rate and molecular weight suddenly increase: at high conversion, the increase in viscosity reduces the termination reaction rate.

MATHEMATICAL MODELING OF POLYMERIZATION

The polymer production is accompanied by a reduction in the volume of the mixture, characterized by a volumetric contraction factor ε; the instantaneous volume of the mixture then varies with the conversion X as V = V0(1 + εX), and the parameter β is defined in terms of the model constants [1,14,15]. During free radical polymerization, the cage, glass, and gel effects occur. For the cage effect, the initiator efficiency factor is used. The CCS (Chiu, Carratt and Soong) model is used in this study to take the glass and gel effects into consideration; a simplified kinetic sketch that ignores these effects is given below.
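As a rough numerical illustration of the mechanism above, the following sketch integrates the textbook monomer and initiator balances under the quasi-steady-state radical assumption, R_p = k_p[M](f k_d[I]/k_t)^(1/2). It is a minimal sketch only: the gel, glass and cage effects and the volume shrinkage are ignored, and all rate constants are placeholder values, not the identified parameters of this reactor.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate constants: efficiency (-), kd (1/s), kp and kt (L/(mol s))
f, kd, kp, kt = 0.5, 1e-5, 500.0, 5e7

def rhs(t, y):
    M, I = y
    # QSSA propagation rate: Rp = kp * [M] * sqrt(f * kd * [I] / kt)
    Rp = kp * M * np.sqrt(f * kd * I / kt)
    return [-Rp, -kd * I]        # monomer and initiator balances

# Initial charge: 5 mol/L monomer, 0.02 mol/L initiator; 4 h batch
sol = solve_ivp(rhs, (0.0, 4 * 3600.0), [5.0, 0.02])
X = 1.0 - sol.y[0] / 5.0         # monomer conversion over time
print(f"conversion after 4 h: {X[-1]:.2f}")
```

Without the gel-effect correction, this toy model cannot reproduce the auto-acceleration described above; it only shows the smooth low-conversion regime.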
To account for these effects, the propagation rate constant k_p and the termination rate constant k_t are corrected according to the CCS correlations, in which θ_p and θ_t are adjustable parameters related to the propagation and termination rate constants, respectively. All other necessary parameters and constants for this model are given in [1], [14] and [15]. The Long Chain Approximation (LCA) and the Quasi Steady State Approximation (QSSA) are used in this study. The resulting equations are highly nonlinear and were converted to linearized form using Taylor series expansion. The linearized state-space form is given by

\dot{x} = A x + B u, \qquad y = C x. \qquad (7)

The molecular properties of the produced polymer are controlled by ensuring that the reaction temperature follows a desired reference trajectory. This is a tracking control problem, which we solve using MPC. Figure 1 shows the result of the model validation [14]; as can be seen, the model represents the process well. Equation (7) is converted to the transfer function from input power to reaction temperature,

G(s) = C (sI - A)^{-1} B.

The result of the sequential linearization is 131 transfer functions along the temperature profile. See [13] and [14] for a more detailed description of the MMA polymerization dynamic modeling.

Experimental Setup

A schematic representation of the experimental batch reactor setup is shown in Figure 2 [14]. The reactor is a Buchi-type jacketed, cylindrical glass vessel; a multi-paddle agitator mixes the contents. Two Resistance Temperature Detectors (RTDs) of PT100 type were used to measure the reactor temperature and the oil temperature in the oil bath. PT100 sensors provide good linearity in the measurement range and negligible drift. Methyl methacrylate and toluene were used as monomer and solvent, respectively, and benzoyl peroxide (BPO) was used as the initiator. The heater heats the oil circulating in the oil bath, which is pumped into the reactor. Cooling water is circulated in a coolant coil inside the oil bath through an electric on/off valve and acts as a safety feature to prevent the oil, and consequently the reactor, from overheating. The RTD outputs are converted into 0-10 VDC through a bridge and an instrumentation amplifier and are read by the A/D channels of the data acquisition card. The controller output is fed into a MOSFET-based power electronics switching circuit as PWM signals. The control command is applied to the MOSFETs through opto-couplers, which provide optical isolation from a three-phase 220 V, 50 Hz power line. The maximum heater power available is 1000 W, which is a constraint on the control signal.

Model Predictive Control

Due to its high performance, the model predictive control method has received a great deal of attention for the control of chemical processes in the last few years. Figure 3 shows the block diagram of a model predictive controller. There are three main approaches to model predictive control: MAC (Model Algorithmic Control), which is based on the system's impulse response; DMC (Dynamic Matrix Control), which uses samples of the process step response; and GPC (Generalized Predictive Control), which is based on the process transfer function. In practice, it is easier to obtain step response samples than an impulse response or a full transfer function, and therefore the DMC method is more popular. We use the DMC method in this research.
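Before turning to the DMC formulation, the sketch below illustrates how one member of the trajectory-linearized model bank could be produced: a nonlinear state equation is linearized by finite-difference Jacobians at an operating point on the trajectory and converted to a transfer function with scipy.signal.ss2tf. The two-state model f used here is a made-up surrogate for the reactor/oil-bath dynamics, not the identified model of the paper.

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical nonlinear surrogate dx/dt = f(x, u); x = [T_reactor, T_oil], u = power
def f(x, u):
    T, To = x
    return np.array([0.02 * (To - T) - 1e-3 * T,
                     0.01 * (u / 50.0 - (To - T))])

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du at an operating point."""
    n = len(x0)
    A, B = np.zeros((n, n)), np.zeros((n, 1))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B[:, 0] = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B

# One operating point on the temperature trajectory -> one member of the model bank
A, B = linearize(f, np.array([60.0, 70.0]), 400.0)
C, D = np.array([[1.0, 0.0]]), np.zeros((1, 1))   # output: reactor temperature
num, den = ss2tf(A, B, C, D)                      # transfer function T(s)/P(s)
print("numerator:", num, "denominator:", den)
```

Repeating this at points along the desired profile is how a bank such as the 131 transfer functions mentioned above could be generated and then scheduled by the controller.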
The cost function is defined as

J = \sum_{j=N_1}^{N_2} \left[ \hat{y}(t+j|t) - y_d(t+j) \right]^2 + \lambda \sum_{j=1}^{N_u} \left[ \Delta u(t+j-1) \right]^2, \qquad (9)

where \hat{y} is the predicted output, N_1 and N_2 delimit the prediction horizon, N_u is the control horizon and \lambda weights the control effort. With r the reference input, the following filtered form is used as the tracking trajectory:

y_d(t+k) = \alpha\, y_d(t+k-1) + (1-\alpha)\, r(t+k), \qquad y_d(t) = y(t).

The parameter α sets the location of the first-order smoothing filter pole: the smaller α, the faster the output becomes. It has been shown that system robustness is decreased by reducing α and by increasing the control horizon [16]. Expanding the summations and substituting the quadratic forms, the cost function in (9) can be rearranged in matrix form as

J = (Y - Y_d)^T (Y - Y_d) + \lambda\, \Delta U^T \Delta U.

There is no pure time delay in the model; therefore N_1 is zero. In forming (14), the known future set points are used, which is called programmed MPC. If the future set points are unknown, y_d is assumed to be constant during the prediction horizon, i.e. y_d(t+j) = y_d(t), which is called non-programmed MPC. For an LTI system, without any constraints on the output or control signal, the above optimization problem has the closed form

\Delta U^+ = \left( G^{+T} G^+ + \lambda I \right)^{-1} G^{+T} \left( Y_d - Y_{Past} \right),

where the g_i's are the step response samples and G^+ is a Toeplitz matrix consisting of the step response samples, as shown in (19). There is no closed-form solution for the formulated constrained optimization problem; hence, an online optimization algorithm (a genetic algorithm) is applied to solve the problem, as discussed later. The model output is

y(t) = \sum_{i=1}^{N} g_i\, \Delta u(t-i),

where N is the number of step response samples needed to reach steady state (equivalently, the number of impulse response samples needed to decay to zero) and g_N is the system DC gain.

THE MODIFIED DMC

If the system has any poles close to the origin, the step response will be very slow and the required N very large. A system including an integrator never reaches steady state (this case exists in the set of linearized models of the MMA reactor) and N approaches infinity; hence, instability occurs. This is one of the limitations of the standard DMC formulation, making it applicable only to open-loop stable systems [17]. To overcome this, one practical solution is as follows. We have

Y = G^+ \Delta U^+ + Y_{Past},

where Y_{Past} is the effect of past inputs on the future system outputs, without considering the effect of present and future inputs. Consequently, Y_{Past} can be calculated by setting the future Δu's equal to zero and solving the model P steps ahead. As seen in (15) and (19), G^+ and ΔU^+ are independent of N; the only quantity whose dimension is determined by N is G^-, which is now omitted. Therefore, using this technique, the DMC computations become independent of N, and this modified formulation can be used for marginally stable and unstable plants alike. As discussed in the previous section, we have used a piecewise linear model of the MMA polymerization process. In our application, since the valid model for future time steps may change through the course of the predictions, the corresponding valid models are used in the computation of Y_{Past} to predict the future model outputs. In other words, the model used for output prediction is scheduled through the prediction horizon, exploiting the developed trajectory-linearized model. This virtual model switching is utilized to calculate the optimal control sequence in each time step; another model switching is also applied through the course of the time simulation of the closed-loop control system. Furthermore, since the whole desired temperature profile is known a priori, programmed MPC is used, utilizing the known future desired temperature trajectory.

GENETIC ALGORITHM OPTIMIZATION

In general, there is no closed-form solution for MPC, except in the linear time-invariant unconstrained case.
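For that exceptional unconstrained LTI case, the closed form above can be exercised in a few lines: the sketch below builds the Toeplitz matrix G+ from step response samples and computes the optimal move sequence. The first-order step response used here is invented for illustration; in the paper itself, the heater power constraint rules out this closed form, and the GA of the next section is used instead.

```python
import numpy as np

def dmc_gain(g, P, Nu, lam):
    """Toeplitz dynamic matrix G+ (P x Nu) from step response samples g,
    plus the unconstrained DMC solution operator (G'G + lam I)^-1 G'."""
    G = np.zeros((P, Nu))
    for i in range(P):
        for j in range(Nu):
            if i >= j:
                G[i, j] = g[i - j]
    K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)
    return K, G

# Illustrative first-order step response: gain 1, tau = 30 s, Ts = 10 s
g = 1.0 - np.exp(-np.arange(1, 31) * 10.0 / 30.0)
K, G = dmc_gain(g, P=10, Nu=3, lam=0.1)

y_d = np.full(10, 1.0)     # desired trajectory over the prediction horizon
y_past = np.zeros(10)      # free response Y_Past (effect of past inputs)
dU = K @ (y_d - y_past)    # optimal future moves; only dU[0] is applied
print("optimal moves:", dU)
```

Under receding-horizon operation only the first move is applied; at the next sample Y_Past is recomputed (here, with the then-valid member of the model bank) and the optimization repeats.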
Otherwise, the optimization problem must be solved numerically. If the solution space is convex, sequential quadratic programming (SQP) techniques can be used; otherwise, either the optimization problem must be convexified through approximation and relaxation methods or a global optimization algorithm must be used. Genetic algorithms (GAs) are randomized global search methods inspired by natural evolution (the genetic mechanism of survival of the fittest). They possess implicit internal parallelism and good global optimization ability; being probabilistic, they can explore the search space and adjust the search direction autonomously [18]. Genetic algorithms, first introduced by Holland [19], are robust global random search methods founded on the Darwinian concept of natural selection and evolution [20], and have been used extensively in optimization and control [21-23].

a. Coding

Coding is essential for GA optimization. Coding is a mapping from the solution space to a set of finite-length strings. Binary coding is the most widely used method: each point of the solution space is coded as a binary string, a permutation of 0s and 1s. Each 0 or 1 is called a gene, and the string is called a chromosome. Every potential solution ΔU^+, which is a point in the solution space, is coded as a binary string in which each of the N_u components is a binary word of fixed length, called a subchromosome. Figure 4 shows the binary coding of the ΔU^+ vector as a chromosome. At the beginning, an initial population consisting of some potential solutions is randomly selected. The final solution is obtained by an iterative process that evolves this initial population. In each iteration, the next population is generated by applying the crossover and mutation operators to the selected individuals; their offspring make up the next population. The selection probability of an individual depends on its fitness function, which is a measure of goodness. The encoding method and the fitness function definition are the only links between the physical problem and the GA optimization. The fitness function is chosen as a decreasing function of the DMC cost in (9), so that individuals with lower cost receive higher fitness.

b. Selection

In this stage, pairs from the k-th population p_k are selected to reproduce their offspring. The tournament selection strategy, a stochastic method, is applied to select each parent: two individuals are randomly selected and the better of the two (according to fitness value) wins the tournament; the parents are then returned to p_k.

c. Crossover

This operator crosses parents to produce new offspring by gene interchange. Crossover may occur at one or more positions in the chromosomes. Researchers have suggested several crossover operators, such as one-point, multipoint and uniform crossover. In this study, a multipoint crossover operator is used to interchange genes between the subchromosomes of the parents. Figure 5 shows the multipoint crossover genetic operator; the crossover sites are determined randomly.

d. Mutation

In this phase, a random gene of the chromosome is selected and its value is changed. To do so, a random number between 0 and 1 is generated and compared with the predetermined mutation probability. In addition, the best individual of each population is directly transferred to the next one to prevent deterioration (elitism). The details of the optimization algorithm are depicted in Figure 6. In the controller simulation, the optimization is conducted on a population of 50 chromosomes.
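The following self-contained sketch mirrors this GA loop (binary coding into N_u subchromosomes, tournament selection, crossover, bit-flip mutation and elitism) on a toy DMC problem with the 1000 W actuator limit imposed as a penalty. Two simplifications are assumptions of this sketch, not of the paper: single-point crossover is used instead of the multipoint operator, and the fitness is taken as 1/(1+J), one plausible decreasing function of the cost, since the paper's exact fitness expression is not reproduced above.

```python
import numpy as np

rng = np.random.default_rng(1)
BITS, NU, P, POP, GENS = 8, 3, 10, 50, 200
PM, U_MAX = 0.02, 1000.0

# Toy first-order step response model (not the identified reactor model)
g = 1.0 - np.exp(-np.arange(1, P + 1) / 3.0)
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(NU)] for i in range(P)])
y_d, y_past, u_prev = np.full(P, 80.0), np.full(P, 75.0), 500.0

def decode(chrom):
    """Each of the NU subchromosomes (BITS genes) maps to a move in [-100, 100] W."""
    vals = chrom.reshape(NU, BITS) @ (2.0 ** np.arange(BITS)[::-1])
    return (vals / (2**BITS - 1) * 2.0 - 1.0) * 100.0

def fitness(chrom):
    dU = decode(chrom)
    u = u_prev + np.cumsum(dU)
    if np.any(u < 0.0) or np.any(u > U_MAX):   # heater constraint as a hard penalty
        return 0.0
    J = np.sum((y_past + G @ dU - y_d) ** 2) + 0.1 * np.sum(dU**2)
    return 1.0 / (1.0 + J)                     # assumed goodness measure

pop = rng.integers(0, 2, size=(POP, NU * BITS))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    new = [pop[fit.argmax()].copy()]           # elitism: best individual survives
    while len(new) < POP:
        i, j = rng.integers(POP, size=2)       # tournament selection, parent a
        a = pop[i].copy() if fit[i] > fit[j] else pop[j].copy()
        i, j = rng.integers(POP, size=2)       # tournament selection, parent b
        b = pop[i] if fit[i] > fit[j] else pop[j]
        cut = int(rng.integers(1, NU * BITS))  # single-point crossover site
        a[cut:] = b[cut:]
        a ^= (rng.random(NU * BITS) < PM).astype(a.dtype)  # bit-flip mutation
        new.append(a)
    pop = np.array(new)

best = pop[np.array([fitness(c) for c in pop]).argmax()]
print("best control moves dU:", decode(best))
```

In a receding-horizon controller this whole loop runs once per sampling interval, which is feasible here because the sampling periods are tens of seconds, as discussed next.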
As mentioned earlier, GA optimization belongs to the family of randomized search algorithms. It is worth mentioning that, in general, there are no theoretical guarantees on the speed of convergence of randomized algorithms (convergence within a certain time frame, if at all). However, since the cost function is convex and chemical processes are generally slow, allowing sampling periods of the order of tens of seconds, the speed of convergence should not be a concern here, provided that the population size and the evolution parameters are selected properly. At the same time, premature convergence is avoided by ensuring good genetic variation; genetic variation can be regained by using a large enough population size and also by mutation [25]. On a separate note, in fast systems with millisecond sampling times, the optimization algorithm may not converge within the sampling interval and the optimization computation might be halted. In such cases, especially in mission-critical and safety-critical applications, hybrid algorithms using FSQP (Feasible SQP) solvers must be utilized to ensure feasibility (satisfaction of all constraints) of the optimizer in each and every iteration, even before convergence.

SIMULATION RESULTS

The population is the foundation of the evolution of a genetic algorithm: the character of the population decides the search capability of the algorithm, and the convergence of the genetic algorithm is determined by the convergence of the population [24]. First, the effects of population size and the number of generations on controller performance are studied. Our simulations show that the GA optimization algorithm converges well within the selected MPC sampling period in all runs. Figure 8 shows the simulation results of the controller performance with a population size of 50 and the number of generations equal to 750. The top plot shows the system output versus the desired thermal trajectory and exhibits the controller's high performance. The middle plot is the control signal, which, as seen in the plot, satisfies the constraint. The bottom plot is the tracking error, i.e. the difference between the actual and desired outputs. The DMC controller provides integral action, capable of rejecting step disturbances. The ability of the controller to reject noise and disturbances is shown in Figure 9, in which a step output disturbance and a zero-mean white Gaussian measurement noise are applied. As seen, the controller has good disturbance rejection performance, and the average absolute tracking error is smaller than those of previously reported controllers: the adaptive PI control in [13] has a 2 °C average error and the generalized Takagi-Sugeno-Kang fuzzy controller proposed in [26] has a 1 °C average error, demonstrating the superior performance of the MPC.

CONCLUSION

A sequential piecewise-linearized model-based predictive controller based on the DMC algorithm was designed to control the temperature of a batch MMA polymerization reactor. Using the mechanistic model of the polymerization, a transfer function was derived relating the reactor temperature to the power of the heaters, and the coefficients of the transfer function were calculated along the selected temperature trajectory by sequential linearization. A genetic algorithm (GA) was applied to optimize the DMC cost function. The simulation results show that the controller's tracking of the profile and its rejection of noise and disturbances are very good.
2015-02-14T17:11:51.000Z
2014-01-27T00:00:00.000
{ "year": 2015, "sha1": "e21bd5092d85aae90dde0a32fe3bddfb30171e68", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=42868", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "e901786292a3ac7726b12ae763f66492d36f4f23", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
119350376
pes2o/s2orc
v3-fos-license
Luttinger liquid behaviour and superconducting correlations in t-J ladders

The low energy behaviour of the isotropic t-J ladder system is investigated using exact diagonalization techniques, specifically finding the Drude weight, the charge velocity and the compressibility. By applying the ideas of Luttinger liquid theory, we determine the correlation exponent $K_\rho$ which defines the behaviour of the long range correlations in the system. The boundary to phase separation is determined and a phase diagram is presented. At low electron density, a Tomonaga-Luttinger-like phase is stabilized, whilst at higher electron densities a gapped phase with power law pairing correlations is stabilized: a large region of this gapped phase is found to exhibit dominant superconducting correlations.

I. INTRODUCTION

Over the last few years, the behaviour of strongly correlated electrons confined to coupled chains has received widespread attention; the reasons for this are numerous. Firstly, the behaviour of electrons in one dimension, under t-J or Hubbard type interactions, is now relatively well understood and described generally by the term Luttinger Liquid (LL). The coupling of two such Luttinger liquids, as in the ladder geometry, provides an interesting first step towards the challenge of describing the behaviour of two-dimensional systems. A second reason for interest in these systems lies in the unusual nature of the ground state of the undoped system, namely spin liquid behaviour with a finite gap in the spin excitation spectrum [1]; this behaviour is in contrast to the gapless behaviour of a single chain. The evolution of the spin-gapped state on doping has obvious relevance to gapped superconducting behaviour. In addition, compounds such as $(\mathrm{VO}_2)\mathrm{P}_2\mathrm{O}_7$ [2] and $\mathrm{SrCu}_2\mathrm{O}_3$ [3] are believed to be well described by a lattice of coupled chains. Very recently, experiments on $\mathrm{La}_{1-x}\mathrm{Sr}_x\mathrm{CuO}_{2.5}$ [4] have provided insight into the doping of coupled chain systems. Whilst there is considerable literature on many aspects of t-J ladder behaviour, a complete picture is still far from being realised: our aim in this paper is to clarify some of the behaviour of the t-J ladder system by drawing on some of the ideas used in describing the strictly one-dimensional systems (i.e. LL theory). In conjunction with results from several other techniques, we will then present a speculative phase diagram. The t-J Hamiltonian on the 2 × L ladder is defined as

$$H = J' \sum_j \left( \mathbf{S}_{j,1} \cdot \mathbf{S}_{j,2} - \tfrac{1}{4} n_{j,1} n_{j,2} \right) + J \sum_{\beta,j} \left( \mathbf{S}_{j,\beta} \cdot \mathbf{S}_{j+1,\beta} - \tfrac{1}{4} n_{j,\beta} n_{j+1,\beta} \right) - t \sum_{j,\beta,s} P_G \left( c^{\dagger}_{j,\beta;s} c_{j+1,\beta;s} + \mathrm{H.c.} \right) P_G - t' \sum_{j,s} P_G \left( c^{\dagger}_{j,1;s} c_{j,2;s} + \mathrm{H.c.} \right) P_G, \qquad (1)$$

where most notations are standard. β (= 1, 2) labels the two legs of the ladder (oriented along the x-axis), while j is a rung index (j = 1, ..., L). We shall concentrate on the isotropic case, where the intra-chain (along x) couplings J and t are equal to the rung (along y) couplings J' and t'. At half filling the Hamiltonian reduces to the Heisenberg model and the behaviour is relatively well understood [1]. A simple interpretation is given by considering the strong coupling limit (J = 0), in which the ground state consists of a singlet on each rung, with a spin gap (∼ J') corresponding to forming a triplet on one of the rungs. With the introduction of the intra-chain coupling J, the triplets can propagate and form a coherent band, thereby reducing the spin gap. In the isotropic case, the spin gap remains finite (∼ 0.5J).
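As a toy check of this rung-singlet picture at half filling, the sketch below diagonalizes a small 2 × L Heisenberg ladder (J = J' = 1) and reads off the gap to the first excited state, which is expected to be the triplet. L = 4 with periodic legs is far smaller than the systems studied in the literature, so the number is illustrative only.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

L = 4                        # rungs; 2*L = 8 spins, Hilbert space 2^8
nsites = 2 * L
bonds = []
for j in range(L):
    bonds.append((2 * j, 2 * j + 1))                          # rung bonds (J')
    for leg in range(2):
        bonds.append((2 * j + leg, 2 * ((j + 1) % L) + leg))  # leg bonds (J), periodic

dim = 2 ** nsites
H = lil_matrix((dim, dim))
for s in range(dim):                      # basis state s: bit i = spin at site i
    for (a, b) in bonds:
        sa, sb = (s >> a) & 1, (s >> b) & 1
        if sa == sb:
            H[s, s] += 0.25               # Sz.Sz, parallel spins
        else:
            H[s, s] -= 0.25               # Sz.Sz, antiparallel spins
            H[s ^ (1 << a) ^ (1 << b), s] += 0.5   # (S+S- + S-S+)/2 spin flip

E = np.sort(eigsh(H.tocsr(), k=3, which="SA", return_eigenvectors=False))
print(f"E0 = {E[0]:.4f}, approximate spin gap ~ {E[1] - E[0]:.4f} (units of J)")
```

The same full-Hilbert-space construction, restricted to the no-double-occupancy basis and resolved by symmetry sectors, underlies the doped t-J calculations discussed below.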
The evolution of this spin-gapped state on doping is perhaps one of the most interesting aspects of the ladder behaviour [5]. Recent work on this hole-doped phase [8,9,16] has indicated a finite spin gap, a single gapless charge mode, hole pairing and possibly dominant superconducting correlations. In this paper we discuss some new independent results which provide a more complete description not only of this gapped phase but of the whole region of parameter space. A possible phase diagram for the isotropic t-J ladder as a function of J/t and doping has been proposed recently [8], and we use these ideas in our analysis: away from half filling the spin-gapped phase is stabilized up to J/t ∼ 2.1, where the system phase separates [10]. As the system is doped further, a phase with a single gapless spin and a single gapless charge mode is found and, as we shall explain, this behaviour is like that of the one-dimensional Tomonaga-Luttinger liquid. As in the gapped phase, phase separation occurs for large J/t. At very small electron densities, an electron-paired phase exists. Note that although in the t-J ladder there are four possible zero-momentum gapless modes (two spin and two charge), throughout the phase diagram only one gapless charge mode and either zero or one gapless spin modes are observed; this then allows an almost identical treatment to the strictly one-dimensional case. In section II we discuss briefly the Luttinger liquid theory used to describe strictly one-dimensional systems and also the possible application of this theory to coupled chains; in section III we present our numerical calculations; and finally in section IV we apply the Luttinger liquid theory to our results and present a phase diagram for the system.

II. LUTTINGER LIQUID BEHAVIOUR: TOMONAGA-LUTTINGER AND LUTHER-EMERY PHASES

In dealing with strictly one-dimensional interacting fermion systems, one can in general make use of conformal field theory [11] and bosonization [12], which allow the decay exponents of the various correlation functions to be determined from the low energy behaviour of the model. The general idea is that one-dimensional interacting fermion systems can be mapped onto the Fermi-gas model and the corresponding 'g-ology' weak coupling theory [13]. This Fermi gas model scales to two different regimes, namely the Tomonaga-Luttinger (TL) fixed point and the Luther-Emery (LE) line, which are relevant for repulsive ($g_1 > 0$) and attractive ($g_1 < 0$) backscattering matrix elements respectively; as we shall explain, the important difference between these two universality classes lies in the spin degrees of freedom. The low lying excitations of the Fermi gas model are collective spin or charge density oscillations, which propagate with different velocities, giving rise to spin-charge separation and power law behaviour of the correlation functions. In the TL phase, both a gapless spin and a gapless charge mode are exhibited; in contrast, whilst exhibiting a gapless charge mode, the LE phase has a gap to spin excitations. Conformal field theory relates the properties of a finite system (such as the compressibility and the Drude weight) to the correlation exponents; one coefficient ($K_\rho$) determines the exponents of all the power law decays (and similarly the singularity of the momentum distribution function close to $k_f$).
Hence, if a particular model scales to the TL (or LE) universality class, one can infer the dominant correlations from the low energy behaviour of the system, which can be deduced from much smaller system sizes than would be required to calculate the correlation lengths directly. The relationships between the correlation exponent $K_\rho$ and the low energy behaviour of the model are given below for a system of size N and length L (we have chosen to give general equations such that N = L for a chain and N = 2L for the ladder geometry). Firstly, the ratio of the charge velocity $u_\rho$ to the coefficient $K_\rho$ is proportional to the variation of the ground state energy $E_0$ with particle density n, i.e. the inverse compressibility:

$$\frac{\pi}{2}\,\frac{u_\rho}{K_\rho} = \frac{1}{N}\,\frac{\partial^2 E_0}{\partial n^2}. \qquad (2)$$

The coefficient $K_\rho$ is also related to the Drude weight $\sigma_0$; this Drude weight is the weight of the zero frequency (dc) peak in the conductivity $\sigma_\omega$ and may be obtained by considering the curvature of the ground state energy level as a function of the threaded flux Φ,

$$\sigma_0 = \frac{L^2}{4\pi^2 N}\,\left.\frac{\partial^2 E_0}{\partial \Phi^2}\right|_{\Phi=0} = \frac{2\,u_\rho K_\rho}{\pi}. \qquad (3)$$

We can also determine the charge velocity by considering the dispersion of the energy spectrum,

$$u_\rho = \frac{L}{2\pi}\,\left( E_{1\rho} - E_0 \right), \qquad (4)$$

where $E_{1\rho}$ is the lowest lying charge mode above the ground state ($E_0$) with neighbouring k value. These equations provide three independent conditions on $K_\rho$ and $u_\rho$, which can be used to check the consistency of the Luttinger liquid relations. Moreover, the calculation of the parameter $K_\rho$ is relatively straightforward, and this parameter then determines the exponent coefficients of all the correlation functions. As explained previously, the essential difference between the TL and LE fixed points lies in the spin degrees of freedom: the LE phase is spin-gapped whilst the TL phase is gapless. The correlation exponents of the two cases are summarised in Table I (taken from reference [6]); we have omitted the logarithmic corrections, which are detailed in reference [14]. We emphasise that for LL systems the correlation functions show either power law or exponential decay, with interaction-dependent powers determined by the single coefficient $K_\rho$. We also see that for $K_\rho < 1$ (spin or charge) density waves at $2k_f$ are enhanced and diverge, whereas for $K_\rho > 1$ pairing fluctuations dominate. Whilst in strictly one-dimensional systems there is a single gapless charge mode, the theory could equally well be applied in a case where more than one gapless mode existed, if the excitations were decoupled in the low energy regime. In such a case, each degree of freedom would have an associated operator algebra, i.e. an associated $K_\rho$. Specifically considering the problem of coupled chains, we note that at present little is known, and there is much interest in how to connect the quasi-one-dimensional results to the strictly one-dimensional case. A recent calculation by Schulz [15] has considered the coupling of two Luttinger liquids by a small interchain hopping, using a bosonization technique. Interestingly, in the presence of both forward and backward scattering terms, the calculation predicts a gap in all the magnetic excitations and a gapless charge mode (as observed for the hole-doped region of the t-J ladder). This gapped phase, however, exhibits somewhat different correlations to the strictly one-dimensional LE phase described above. Firstly, the CDW$_\pi$ and SDW$_\pi$ correlations decay exponentially along the chains (we use the notation of Schulz, where 0 or π indicates that the oscillations are in or out of phase between the two chains respectively, i.e. 'bonding' or 'antibonding').
A divergent density-density response, decaying as $\sim \cos[2(k_f^0 + k_f^\pi)r]\, r^{-K_\rho}$, exists in analogy with the $4k_f$ oscillations of a single chain ($k_f^0$ and $k_f^\pi$ refer to the Fermi points of the bonding and antibonding quasiparticle branches respectively); we note that our coefficient $K_\rho$ differs by a factor of two from that used by Schulz, since we have chosen to define $\sigma_0$ and $\kappa$ as the Drude weight and compressibility per site rather than per rung. The superconducting correlations (cross-chain pairing) decay as $r^{-1/K_\rho}$ and exhibit a 'd'-like character. Hence for this gapped phase we would expect dominant superconducting correlations for $K_\rho > 1$. In agreement with these findings, different approaches by Troyer et al [16], Nagaosa [17], and Balents and Fisher [18] predict similar behaviour in the spin-gapped region: i.e. 'Luther-Emery-like' in the sense that there exist two order parameters (analogous to the on-site pairing and $2k_f$ CDW in the LE class) whose exponents obey a reciprocal relation; these correspond to the pair-field correlations and to a four-fermion operator $\langle n_B(r)\, n_B(0)\rangle$, where $n_B$ is the density of 'bosonic' hole pairs bound on a rung (analogous to the $4k_f$ oscillations of a single chain, i.e. $2k_f^0 + 2k_f^\pi$). In Table II we summarize the correlation exponents predicted for this spin-gapped phase in the ladder geometry. In the following section we give details of calculations using these ideas to characterize the behaviour of the t-J ladder, finding the parameter $K_\rho$ and hence the dominant correlation functions. We consider various electron densities and various ratios of J/t to build up a speculative phase diagram.

III. NUMERICAL CALCULATIONS

Since we require information concerning the low-energy properties of the model, the dominant technique we have employed is exact diagonalization of finite systems, specifically 2 × 5 and 2 × 10 double-chain rings. Exact diagonalization techniques are particularly well adapted to the investigation of low-energy modes, since the implementation of various quantum numbers is straightforward; the various excitation modes can be obtained by calculating the ground-state energy in each symmetry sector. The low-energy modes of the system are characterized firstly by their spin: singlet and triplet excitations correspond to charge and spin modes respectively. It is also useful to consider the parity of the states under reflection in the symmetry axis of the ladder along the direction of the chains: even ($R_x = 1$) or odd ($R_x = -1$) excitations correspond to bonding (B) or antibonding (A) modes respectively (0 and π in the notation of Schulz [15]). Finally, the dispersion relation of each mode is determined by the momentum $k_x = 2\pi n/L$. In order to ensure that the antiferromagnetic correlations are not frustrated when one goes around each chain, we have chosen the electron number to be always a multiple of four; our results then concern electron densities 0.4 and 0.8 for both system sizes and, in addition, 0.2 and 0.6 for the larger system size. The absolute ground state is given by the boundary conditions that form a closed shell in the non-interacting Fermi sea (obtained by turning off the interaction J), and these are used in the calculations of $u_\rho$ and $\kappa$: specifically, anti-periodic boundary conditions for n < 0.5 and periodic boundary conditions for n > 0.5.
A. Drude Weight and Anomalous Flux Quantization

The first calculation we present concerns the Drude weight, defined by equation (3). The numerical technique involves threading the double-chain ring with a flux Φ and studying the functional form of the ground-state energy with respect to the threaded flux, namely $E_0(\Phi)$. In general $E_0(\Phi)$ consists of a series of parabolas corresponding to the curves of the individual many-body states $E_n(\Phi)$; this envelope exhibits a periodicity of one, where we have chosen to measure the flux in units of the flux quantum $\Phi_0 = hc/e$. Note that the function $E_0(\Phi)$ also gives a quantitative value of the superfluid density $D_s$, which is in general different from $\sigma_0$ [19]. The Drude weight corresponds to the curvature of a single ground-state many-body energy level, whilst the superfluid density corresponds to the curvature of the envelope of the individual many-body states as a function of flux. However, since the flux $\phi_c$ at which another many-body energy level crosses the zero-flux ground-state energy level varies as $\phi_c \sim (hc/e)\, L^{1-d}$ (where d is the dimension) [19], in one dimension $\phi_c$ is independent of L; there are only a finite number of energy-level crossings in the thermodynamic limit, and $\sigma_0$ and $D_s$ are equal (up to a factor of 2π). In addition to the Drude weight and the superfluid density, the function $E_0(\Phi)$ also yields information regarding the phenomenon of anomalous flux quantization; this has been explained in a previous publication [9], so we mention it only briefly here. Whilst in general the ground-state envelope $E_0(\Phi)$ exhibits a periodicity of one, Byers and Yang [20] have shown that in the thermodynamic limit $E_0(\Phi)$ exhibits local minima at quantized values of flux, the separation of which is 1/n, where n is the sum of the charges in the basic group. Hence, for a paired superconducting state we would expect minima in $E_0(\Phi)$ at intervals of 1/2. These minima are related to the existence of supercurrents which are trapped in metastable states corresponding to the flux minima and are thus unable to decay away [21]. It should be mentioned that this anomalous flux quantization (AFQ) is an indication of pairing and is not in itself sufficient to imply a superconducting state. Numerically, the application of a flux through the double-chain ring is achieved by modifying the kinetic term of the Hamiltonian such that

$$c^\dagger_{j,\beta;s}\, c_{j+1,\beta;s} \;\rightarrow\; c^\dagger_{j,\beta;s}\, c_{j+1,\beta;s}\; e^{i 2\pi\Phi/L}, \qquad (5)$$

where Φ is the flux through the ring measured in units of $\Phi_0$. Hence the application of a flux is numerically equivalent to a change in the boundary conditions of the problem, Φ = 0 representing periodic and Φ = 1/2 representing anti-periodic boundary conditions. In the thermodynamic limit, $\sigma_0$ must be independent of the phase introduced at the ring boundary [22], and we therefore consider the whole of the envelope $E_0(\Phi)$ as a function of flux (in general consisting of several parabolas). Choosing the parameters n = 0.8 and J/t = 0.5, we show in figure 1a(b) all the possible spin and charge modes of the 2 × 5 (2 × 10) system, for all possible momenta, as a function of applied flux. In the case of the larger system, we show the full spectrum only for Φ < 0.25 in order to simplify the diagram (this work has been published previously [9]). For both system sizes the minimum energy is formed by charge (spin-zero) bonding modes; the excited modes with different quantum numbers move further from the ground state as the system size is increased (a result we have checked by finite-size scaling) and hence will not interfere with $E_0(\Phi)$. The existence of minima at intervals of half a flux quantum (i.e. anomalous flux quantization) clearly indicates the existence of pairing.
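To make the flux-threading procedure concrete, the following is a minimal Python sketch of equations (3) and (5) for a toy non-interacting tight-binding ring; for the t-J ladder itself, the filled-orbital sum below would be replaced by the exact-diagonalization ground-state energy in the appropriate symmetry sector, and all parameter values here are illustrative assumptions.

```python
import numpy as np

# Toy illustration of eqs. (3) and (5): a non-interacting tight-binding ring
# threaded by a flux Phi (in units of Phi_0), entering as the Peierls phase
# exp(i*2*pi*Phi/L) on every hopping term. L, Ne and t are illustrative.
L, Ne, t = 20, 9, 1.0   # Ne odd -> closed shell at Phi = 0

def ground_energy(phi):
    """E_0(Phi): sum of the Ne lowest single-particle levels of the twisted ring."""
    m = np.arange(L)
    eps = -2.0 * t * np.cos(2.0 * np.pi * (m + phi) / L)
    return np.sort(eps)[:Ne].sum()

# Curvature of E_0(Phi) at Phi = 0 by central finite differences,
# then the Drude weight per site from eq. (3), with N = L for a single chain.
h = 1e-3
curv = (ground_energy(h) - 2.0 * ground_energy(0.0) + ground_energy(-h)) / h**2
sigma0 = L**2 / (4.0 * np.pi * L) * curv
print(f"sigma_0 = {sigma0:.4f}")
```

For the interacting problem the curvature is instead averaged over the whole envelope $E_0(\Phi)$, as described below.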
The envelope $L[E_0(\Phi) - E_0(\Phi=0)]$ has been extracted and is shown in figure 2a(b), along with equivalent plots from other regions of the phase diagram: figure 2a shows the data for J/t = 1.0 and electron densities of 0.4 and 0.8, whilst figure 2b shows the data for an electron density of 0.8 and ratios J/t = 0.5 and 4.0. Apart from the curve with n = 0.8, J/t = 4.0 (which appears to scale to a flat function, consistent with a phase-separated state), all the data appear to show only small finite-size effects. Anomalous flux quantization is observed for the larger electron density (at the lower values of J/t), indicating pairing and hence consistent with a superfluid state in this region. In order to determine the Drude weight, we simply calculate the average value of the curvature of $L[E_0(\Phi) - E_0(\Phi=0)]$ over all Φ; a quadratic curve was fitted to each portion. In figure 3a we plot the Drude weight as a function of J/t for electron densities 0.2, 0.4, 0.6 and 0.8 for the 2 × 10 system and electron densities 0.4 and 0.8 for the 2 × 5 system. The curves are plotted up to a maximum in J/t, which is determined by the value at which the system phase separates (see compressibility). In figure 3b we plot the Drude weight as a function of electron density for various values of the ratio J/t. There are several features of the resulting behaviour we should mention. Note firstly that finite-size effects are relatively small, with the 2 × 5 results close to those of the 2 × 10 system. The Drude weight increases as the electron density is increased from zero until it reaches one-quarter filling, and then decreases with increasing electron density. Also, as we would expect for a spin-charge separated state, the Drude weight is effectively independent of J.

B. Charge Velocity

The second quantity we have calculated is the charge velocity, defined by equation (4). Considering the charge bonding modes (the lowest-lying charge modes), the energy difference between the ground-state energy level and the energy level with neighbouring momentum, $\Delta k_x = 2\pi/L$, was calculated. As an example, we show in figure 4a the charge bonding modes for the case n = 0.4, J/t = 2.0, indicating with solid lines the specific energy levels whose energy difference gives the charge velocity. We note that the gap (and therefore $u_\rho$) is approximately constant as a function of flux. For consistency with later data, however, we have calculated the charge velocity at the particular flux which gives the absolute ground state, i.e. Φ = 0.5 for n < 0.5 and Φ = 0 for n > 0.5. The results are shown in figure 4b as a function of J/t for various system sizes and various electron densities (the results for n = 0.6 are not included, since the numerics present some difficulty close to one-quarter filling). Note again that finite-size effects are relatively small.

C. Compressibility

The next quantity we calculate is the compressibility, defined by equation (2). The finite-size equivalent is given by

$$\frac{1}{n^2\kappa} = \frac{E_0(n + \Delta n) + E_0(n - \Delta n) - 2E_0(n)}{N\,(\Delta n)^2},$$

where $E_0(n)$ is the ground-state energy of the finite system of ladder length L with an electron density n. As for the calculation of the charge velocity, the boundary conditions have been chosen to give the absolute ground state. $\Delta n$ represents the finite change in electron density, 0.2 and 0.4 for the 2 × 10 and 2 × 5 systems respectively.
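As an illustration, the finite-difference estimator above amounts to a few lines of code; the ground-state energies below are made-up placeholders standing in for the exact-diagonalization values.

```python
import numpy as np

def inv_n2_kappa(e_minus, e_0, e_plus, n_sites, dn):
    """Finite-difference estimate of 1/(n^2 kappa) from ground-state
    energies at electron densities n - dn, n and n + dn (formula above)."""
    return (e_plus + e_minus - 2.0 * e_0) / (n_sites * dn**2)

# Illustrative numbers only (N = 20 sites, i.e. the 2 x 10 ladder, dn = 0.2):
print(inv_n2_kappa(e_minus=-14.1, e_0=-16.0, e_plus=-17.2, n_sites=20, dn=0.2))
```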
In figure 5 we show the results of the calculation of $1/n^2\kappa$ for electron densities 0.2, 0.4, 0.6 and 0.8 for the 2 × 10 ladder; we also plot the result for the 2 × 5 system at an electron density of 0.4. In addition to the determination of the specific values of the inverse compressibility, the boundary to phase separation may also be determined from these results. It is well known that at sufficiently large values of J/t a system will undergo a separation into two phases, a hole-rich and an electron-rich phase. This effect arises principally to minimize the number of broken antiferromagnetic bonds in the system. Phase separation occurs when the compressibility diverges (i.e. a 'liquid' to 'solid' transition) and hence the inverse compressibility vanishes. Whilst some small finite-size effects remain, the general form of the phase-separation line is readily observed from figure 5. As the electron density is increased, the value of J/t at which phase separation occurs decreases (although electron densities of 0.6 and 0.8 both indicate phase separation close to J/t ∼ 2.1). These results agree well with those of Tsunetsugu et al [10], who used a similar technique but varied both system size and electron density simultaneously. The phase-separation curve can be extrapolated to all electron densities and will be shown in the predicted phase diagram, figure 8.

IV. LUTTINGER LIQUID PARAMETERS FOR THE LADDER

In order to explore the validity of the Luttinger liquid relations in our problem, we consider the ratio $\sigma_0/\pi n^2\kappa u_\rho^2$, which equals unity for a Luttinger liquid. The results of the numerical calculation of this quantity are shown in figure 6 for various electron densities. At low electron densities (i.e. for the cases n = 0.2 and n = 0.4), previous work has suggested that a TL phase is stabilized, and the results for this ratio show good agreement with the predicted value of unity; for the case n = 0.4, an increase in system size shows the ratio scaling towards unity. For the hole-doped spin-gapped region (n = 0.8), where the behaviour is less well understood, the system phase separates at a much lower value of J/t and hence the curve drops to zero at J/t ∼ 2.1. Before this phase separation, the data are not inconsistent with the 'new' LE-like behaviour, which may be described by equations (2)-(4) (finite-size effects are largest for high electron density and low J/t). Since the ratio is close to unity, it appears to confirm the earlier justification for using this one-dimensional theory, i.e. only a single gapless charge mode is observed. With the values of the compressibility and the Drude weight, we can obtain an estimate of the coefficient $K_\rho$ using $K_\rho = \frac{1}{2}\sqrt{\pi n^2 \kappa \sigma_0}$. The behaviour of $K_\rho$ for the different electron densities of the 2 × 10 ladder is shown in figure 7, where we have also plotted the results for a 2 × 5 ladder with electron density 0.4. Note that as J/t is increased, $K_\rho$ increases (for all electron densities), becoming infinite at phase separation as the compressibility diverges. A similar calculation has been performed by Troyer et al [16,10] for a specific electron density of 0.857, i.e. two holes on a 2 × 7 ladder, and the results are consistent in this region.
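For concreteness, the extraction of $K_\rho$ and the Luttinger-liquid consistency check from the three measured quantities is sketched below; the numerical inputs are placeholders, not data read off the figures.

```python
import numpy as np

def luttinger_parameters(inv_n2kappa, sigma0, u_rho):
    """K_rho = (1/2) sqrt(pi n^2 kappa sigma_0) and the consistency
    ratio sigma_0 / (pi n^2 kappa u_rho^2), which is 1 for a Luttinger liquid."""
    n2kappa = 1.0 / inv_n2kappa
    k_rho = 0.5 * np.sqrt(np.pi * n2kappa * sigma0)
    ratio = sigma0 / (np.pi * n2kappa * u_rho**2)
    return k_rho, ratio

# Placeholder inputs standing in for the measured finite-size values:
k_rho, ratio = luttinger_parameters(inv_n2kappa=0.9, sigma0=1.1, u_rho=1.2)
print(f"K_rho = {k_rho:.3f}, consistency ratio = {ratio:.3f}")
```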
Before analysing the behaviour further, specifically the importance of $K_\rho$, we present a speculative phase diagram of the isotropic t-J ladder as a function of J/t and electron density. This phase diagram is shown in figure 8, and we briefly discuss the various regions. At larger values of J/t the system phase separates; to estimate the value of J/t at which this occurs, we show the data points at which the inverse compressibility vanishes in the 2 × 10 system (see figure 5). On doping away from half filling, the spin-gap region persists and a phase exhibiting one gapless charge mode is stabilized. On further doping a gapless phase is stabilized; the schematic boundary between these two phases is shown as a dot-dashed line. As for both the one- and two-dimensional t-J cases [24], a gas of electron pairs is formed at low electron densities above a critical value of the ratio J/t; other 2p-particle (p > 1) bound states could also become stable at larger J/t in this region. Again, the boundary to this paired phase is dot-dashed and schematic. From the data in figure 7 we have plotted contours of constant $K_\rho$ in order to allow a determination of the dominant correlation functions. Whilst the parameter $K_\rho$ is continuous for a particular value of J/t as the electron density is varied [23], at some point between n = 0.4 and n = 0.8 a gap opens in the spin-excitation spectrum and the correlation functions change their form discontinuously, scaling to a different fixed point as explained in section II. A 'jump' in the exponent of the superconducting correlations occurs, with the SCd charge exponent changing from $1 + 1/K_\rho$ to $1/K_\rho$; the $2k_f$ CDW correlations jump from power-law behaviour (exponent $1 + K_\rho$) to exponential decay. In addition, in the gapped high-density state we would expect conjugate 'four-fermion $4k_f$' CDW correlations with exponent $K_\rho$. For both the high-density 'gapped' phase and the low-density TL phase we expect different correlation functions to dominate on either side of the contour $K_\rho = 1$; in both cases superconducting correlations dominate for $K_\rho > 1$. However, in the gapped hole-doped phase this region is much larger and, in contrast to other models such as the one-dimensional t-J model [24], does not exist merely as a precursor to phase separation. A physical picture of the behaviour is of the holes pairing up on the rungs, the spins forming singlets, and the dominant correlation functions then being associated with the movement of the hole pairs. We wish to thank H. J. Schulz for many useful discussions and comments; we also gratefully acknowledge many helpful conversations with M. Luchini, F. Mila, W. Hanke and D. J. Scalapino. Laboratoire de Physique Quantique, Toulouse is Unité de Recherche Associée au CNRS No 505. CAH and DP acknowledge support from the EEC Human Capital and Mobility program under Grants ERBCHBICT941392 and CHRX-CT93-0332. We also thank IDRIS (Orsay) for allocation of CPU time on the C94 and C98 CRAY supercomputers.
2019-04-14T02:15:08.592Z
1995-09-20T00:00:00.000
{ "year": 1995, "sha1": "ef080e575f34954453a77e77b99512b07f79019b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9509123", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b5de961294425b7c8fc2aacc79c22e1d3e38356e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119136726
pes2o/s2orc
v3-fos-license
The Theory of the Three-Level Photon Echo Using the Rotating Wave Approximation

The three-level photon echo has been described in several works using the rotating wave approximation, but none of them obtained results showing the effect of the fields' frequencies on the ground-level frequency of the system. In this work, we study a Lambda-type system theoretically and numerically. By taking the Doppler effect in the medium into account, we derive equations for the polarization of the echo signal and its intensity.

Introduction

Quantum optical data storage is a key element in quantum information processing, such as quantum computing and long-haul quantum communication based on quantum repeaters. However, most modified photon-echo protocols are still limited by the lack of a homogeneous resonance signal with which to explain the time scales of different systems. Photon echo spectroscopy is the most important method for extracting detailed microscopic information about the time scales of molecular and collective dynamics in the condensed phase, and it can cover any inhomogeneous broadening of a state. The three-level photon echo is made up of three different signals in a medium, among whose levels (or dots) the resonance frequencies are distributed homogeneously. In the present paper, using the rotating wave approximation, we show that the ratio of the detuning frequencies equals the ratio of the frequencies between the degenerate and ground levels. Our method of solving the relevant equations is valid yet simpler than other methods, and the resulting equation can be used to determine the resonance form of the echo signal. As has been shown in previous work, the signal resonance depends on the frequencies between the degenerate and ground levels. Having found the relationship between the frequencies and their detunings, it is straightforward to predict an equation for the delay time as a ratio of frequencies. The polarization of the signal usually depends on the frequencies between the levels and on the frequency detunings; this relationship yields an exponential resonance for the signal polarization between the ground level and the second degenerate level. Using these results as numerical data, we found an exponential dependence of the intensity on the frequencies, the signal delay time, and the separation of the ground level from the two degenerate levels. As shown previously, the equation for the frequency detuning between the laser fields can be written either as a function of the frequencies, or as a function of the frequency distance between the ground level and the second degenerate level together with the detuning between the laser field and the transition frequency. In the last section of the paper, the effect of the frequency distance between these two levels on the overall intensity of the output signal is demonstrated. These results appear strongly consistent with the theoretical background and with similar work. In other words, our results on the effect of the frequency distance between the ground and lower degenerate levels can give the decay coefficient of the signal in different regimes of the system, and the position of the echo signal on long time scales can be predicted for the case of quasi-degenerate levels.

Theoretical Framework

When a strong field is used for the transition between the degenerate levels and the ground level, the reaction time of the pulses increases, which means decay of the signal memory time. This system, however, has a Λ-type structure of three-level atoms with long-lived lower levels. These lower levels are fine-structure components, between which the interaction is very weak and arises only from the spin-spin structure.
This can help to create a long-lived hole for use as an information/signal memory. In this system we have two strong femtosecond pulses and a detuning pulse. In real systems, the delay time between the detuning driving pulses is 500 microseconds, which, compared to the pulse duration (100 femtoseconds), increases the signal memory time to $10^9$ times the signal duration. In order to demonstrate this process we studied a three-level Λ-type system with ground state (2). The transition 2 ↔ 3 is driven at frequency $\omega_{23}$ with Rabi frequency $G_c$, and the transition 2 ↔ 1 is driven at frequency $\omega_{21}$ with Rabi frequency $g_c$. The rates of emission from levels 1 and 3 are denoted by $2\gamma_1$ and $2\gamma_2$, so the detunings of the probe and coupling fields are $\Delta_1 = \omega_{12} - \omega_1$ and $\Delta_2 = \omega_{23} - \omega_3$. We start from the Liouville equation

$$\dot{\rho} = \frac{i}{\hbar}\,[H, \rho] - \gamma\rho,$$

where H is the Hamiltonian of the atom-field system and ρ is the density matrix. The density-matrix equations for a three-level system interacting with a pulsed laser [4,5] are written in the semiclassical dipole rotating-wave approximation, where the parameter $\rho_{ii}$ represents the population of level i, $\rho_{ij}$ represents the coherence between levels, $V = -D \cdot E_{laser}\cos(\omega_2 t)$ is the interaction of the medium with the laser field, D is the dipole moment, and d is the frequency distance between the two degenerate levels. We introduce the Rabi frequencies as $g_p = g_0 e^{i\Phi_p}$ and $G_c = g_0 e^{i\Phi_c}$, where Φ is the phase of the field. Several situations arise here: in the first, the laser fields are polarized identically (Figure 1: Λ-type system with driving and probing fields), and in the second they are polarized differently. In general, the phases have only a small influence on the photon-echo signal. We calculate the echo time for an ensemble of atoms in a gas, in which a non-uniform spread of frequencies with the Maxwell velocity distribution occurs due to the Doppler effect; this means the relation $\frac{\Delta_{12}}{\Delta_{23}} = \frac{\omega_{12}}{\omega_{23}}$ is valid, where $\Delta_{12} = \omega_{laser1} - \omega_{21}$ and $\Delta_{23} = \omega_{laser2} - \omega_{32}$. During the first emission, from $t_0$ to $t_{12}$, and the second, from $t_{12}$ to T (second pulse and echo), level 3 is not exposed to the field, and the atoms in this level acquire a phase shift $(T - t_0)(\omega_{12} - \omega_{23})\,\upsilon/c$. No phase shift occurs at level 2 during either the first or the second emission [7]; therefore the overall phase shift for level 2 during the emissions is $\upsilon\,\frac{\omega_{12}}{c}(T - t_{12})$, and the polarization during emission from 2 → 3 follows accordingly. The echo signal occurs when the phase tends to zero, so the signal delay time satisfies

$$T = t_0 + \frac{\omega_{12}}{\omega_{23}}\, t_{12}.$$

Using the elements of the density matrix and solving the Bloch equations for its main elements (x, y, z), equations for the polarizations in the x, y and z directions can be written; from equation (1) it is evident that $P_x = P_z = 0$. Here $k_i$ and $\upsilon_i$ are the wave numbers and frequencies of the different pulses. Using a Gaussian probability function $f(\Delta\omega t)$ for the inhomogeneous distribution of velocities in the system, the result of this method is $P_y \sim P_0 \exp[-T/T^*]$, where $T^*$ is the half-width of the Gaussian distribution.

Numerical Framework

In this section we give numerical simulations using the RWA expressions for the state populations from Sec. II. The relevant system parameters are: signal pulse length 100 fs; probabilities of relaxation transitions $\gamma_{21}, \gamma_{23} \sim 10^{10}\ \mathrm{s}^{-1}$; homogeneous broadening $\Gamma \sim 10^{12}\ \mathrm{s}^{-1}$; $\gamma_{13} = 0$, $\Gamma_{13} = 0$; $\Delta_{12} = 50$; $\Delta_{12}/\Delta_{23} = 1$.
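A minimal Python sketch of this kind of RWA density-matrix simulation is given below. The level ordering, pulse shapes, relaxation model and all parameter values are illustrative assumptions for a generic Λ system, not the exact model of this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a Lambda-type three-level system in the RWA (hbar = 1).
# Basis |1>, |2>, |3>: probe couples 1<->2, coupling field couples 2<->3.
d1, d2 = 1.0, 1.0            # detunings Delta_1, Delta_2 (illustrative units)
g21, g23 = 0.02, 0.02        # relaxation rates out of level 2 (illustrative)

def pulse(t, t0, area=np.pi, tau=1.0):
    # Gaussian envelope normalised so its time integral equals `area`
    return area / (tau * np.sqrt(np.pi)) * np.exp(-(((t - t0) / tau) ** 2))

def rhs(t, y):
    rho = y.reshape(3, 3)
    gp = pulse(t, 5.0)       # first (probe) pulse
    gc = pulse(t, 25.0)      # delayed coupling pulse
    H = np.array([[0.0,     gp / 2, 0.0],
                  [gp / 2,  -d1,    gc / 2],
                  [0.0,     gc / 2, -(d1 - d2)]], dtype=complex)
    drho = -1j * (H @ rho - rho @ H)
    drho[1, 1] -= (g21 + g23) * rho[1, 1]        # population decay of level 2
    drho[0, 0] += g21 * rho[1, 1]
    drho[2, 2] += g23 * rho[1, 1]
    return drho.ravel()

rho0 = np.zeros((3, 3), dtype=complex)
rho0[0, 0] = 1.0                                  # all atoms start in level 1
sol = solve_ivp(rhs, (0.0, 60.0), rho0.ravel(), max_step=0.05)
pol = np.abs(sol.y.reshape(3, 3, -1)[0, 1, :])    # |rho_12| ~ signal polarization
print("peak |rho_12| =", pol.max())
```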
When the pulse resonates with this excited state, the dynamical contributions of the excited and ground states induce a signal. The polarization spectrum of the nonlinear system is found from the Fourier transform of the elements of the density matrix. [Figure caption: distance between control pulses for weakly degenerate levels, b = 0.05, c = 0.10, a = 0.02 (signals polarized differently).] As shown in figure 3, the signal intensity depends exponentially on the delay time. This figure was obtained by using the level-distance parameter in the equation for the probe-field detuning [2]. The intensity of the echo signal depends on the modulation index of the frequency grating, which leads to a reduction of the echo signal. According to [7], during the delay time T spectral diffusion causes the frequency grating to erase and its magnitude to decay in time. The results of our simulation are shown in figure 2. They show a maximum in the signal intensity around T; this peak is related to coherent pump-probe signals. There is clearly no fast decay of the intensity after the third signal separates from the beams in time, so the decay behaviour on long time scales is good. In this figure we observe a periodic, exponentially damped decay of the signal intensity. The period of the quantum beats depends on the frequency distance between the levels (d). This phenomenon has been observed experimentally and is used in echo spectroscopy to detect weak degeneracy in atoms. The difference between the signal-intensity curves in the two polarization cases lies in the position of the periods: the signal in the two cases behaves similarly but occurs at different times, the difference being equal to the coefficient γ. To explain this decrease, we use the energy change of a photon added to or removed from the beam due to the decay of the upper and lower levels. The rate equation for the number of photons $N_p$ in the beam is $\dot{N}_p = B\,\rho\,N_{12}$, where B is the proportionality factor in the transition probability between energy levels i and j, whose degeneracies depend on ω, and $N_{12} = \frac{\omega_2}{\omega_1} N_1 - N_2$. For our system B and ℏω are equal to 1, so for the spectral energy density we have $\dot{\rho} \propto \rho\,N_{12}$. Since $I \propto \rho$ and $I = \frac{\text{Energy}}{\text{Area} \cdot \text{time}}$, we obtain $I = I_0\, e^{-\frac{\omega_2}{\omega_1}\gamma_{12} t} = I_0\, e^{-at}$. The position of the photon-echo signal is proportional to the detuning time between the driving pulses. From this figure we can extract a decay coefficient $a = \frac{\omega_{12}}{\omega_{23}}\gamma_{12}$, in good agreement with theory [1]. When the distance between levels 1 and 3 is zero, meaning they share the same ground level, the intensity behaves like a $\delta(t - \frac{1}{2\gamma})$-like function: the intensity decreases rapidly and there is only one maximum, in the first part of the signal. This curve is the same as the signal intensity when the energy of the ground level is zero [18]. One can see that for d = ∞ there are essentially no quantum beats, and the population relaxation is very slow compared with the theoretical signal-decay coefficient. Figures 2 and 3 reveal some remarkable characteristics of the three-level photon echo obtained with fluorescence-excitation measurements. A direct signature of spectral diffusion is provided by the variation of the injection rate measured as a function of the time between the 1 → 2 transitions. A different view of figure 3 shows the importance of the distance between the two degenerate levels for the signal intensity.

Result

In this paper we developed a general theory of the three-level system under the rotating wave approximation and gave an analytical solution for the problem of the frequency distance between the ground and lower degenerate levels. Moreover, for the three-level system we gave some approximate solutions that are important for analyzing experimental data.
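A toy illustration of this damped-beat behaviour, an exponential envelope $e^{-aT}$ modulated at the level splitting d, can be sketched as follows; the functional form and all values are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Toy damped quantum-beat model of the echo intensity versus delay time T:
# exponential decay with coefficient a, modulated at the level splitting d.
a, d = 0.02, 0.10          # illustrative values echoing the figure caption
T = np.linspace(0.0, 300.0, 7)
I = np.exp(-a * T) * 0.5 * (1.0 + np.cos(d * T))
for Ti, Ii in zip(T, I):
    print(f"T = {Ti:6.1f}   I/I0 = {Ii:.3f}")
```

In this picture, taking d to infinity washes out the beats and leaves only the slow exponential envelope, consistent with the limiting cases discussed above.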
We carried out a numerical simulation of an atomic medium subjected to two driving pulses and one signal pulse in the Λ-scheme. It was shown that the timing of the two long-wavelength three-pulse echoes is proportional to the ratio of the detunings of the resonance transition lines in a non-uniform distribution. The numerical RWA simulation also yielded the dependence of the signal on the delay time for weakly degenerate levels. These results are consistent with experiment and theory. Until now, figures and theories have neglected the relationship between the signal intensity and the field detunings, but in this work we demonstrated a linear signature. In general, the photon-echo spectra are found to depend strongly on the pump and probe frequencies of the system, as well as on the detunings of those frequencies. In the last part of the simulation, we found an important effect of the frequency distance between the two degenerate levels (d) on the delay time of the signal. This dependence could help to establish a coherent distribution of the signal in the system. For example, when investigating the characteristics of a quantum dot, we find that if the first pulse has a duration of 100 fs, the echo time can be increased by increasing the ratio of the detuning frequencies and the frequency distance between the two degenerate levels. Based on the calculations, the ratio of the pulse duration to the interval between pulses 1 and 2 is $T_{pulse}/t_{12} \sim 10^{-6}$. In other words, by exploiting the effect of the frequency distance on the time of echo-signal occurrence, it is possible to increase the memory time of such a dot to 500 s. Increasing the memory time of quantum dots is one of the most important tasks of quantum electronics. In order to use this method for quantum dots, the signal intensity has to be strong enough to give $g_p t_{12} \sim \pi$ (pulse area); in other words, we have to assemble a system of $\sim 5 \times 10^{15}$ atoms.
2014-05-29T19:39:59.000Z
2014-05-29T00:00:00.000
{ "year": 2014, "sha1": "b85ba7ac771113e630b1c092c62901c1ed2b5679", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "089b3c2610f572a58a8c9bd0e7c11413569a6faf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221381716
pes2o/s2orc
v3-fos-license
Approaches to management of cardiovascular morbidity in adult cancer patients – cross-sectional survey among cardio-oncology experts

Background: In cardio-oncology, a range of clinical dilemmas can be identified for which high-quality evidence for management is still lacking. The aim of this project was to study clinical practices and expert approaches to several clinical cardio-oncological dilemmas regarding prediction, prevention and treatment of cardiovascular disease in adult cancer patients. Methods: A cross-sectional online survey was sent to internationally renowned experts in the field of cardio-oncology. Participants were selected based on being first or last authors of papers in the field of cardio-oncology, or principal investigators of trials in this field. Results: Topics discussed include, among others, the use of biomarkers for subclinical cardiovascular toxicity, approaches towards primary prevention and follow-up with medication and lifestyle recommendations, and management of fluoropyrimidine-induced vasospasm, QTc prolongation and asymptomatic declines in left ventricular ejection fraction. Conclusion: The answers provided in this survey shed light on expert-based practices in cardio-oncologic dilemmas. Attitudes towards, as well as discrepancies in, those dilemmas are presented. The existing discrepancies clearly indicate the need to generate high-quality data that allows for more evidence-based recommendations in the future.

In addition, decreasing late, or chronic, treatment-related toxicity is vital to maximize healthy survivorship and improve quality of life in cancer survivors [5]. The cardiovascular system is at particular risk for cancer-treatment-related complications, both in the short and the long term. Over the past decades, a global cardio-oncology effort carried by a large community of clinicians and researchers has focused on understanding and ameliorating cardiovascular effects that can occur in cancer patients. Mechanisms of toxicity have been unraveled, and newer projects focus on cardiovascular toxicity from contemporary oncological treatments, such as checkpoint inhibitors. Several intervention studies on primary prevention have been performed [6-11], and some predictive models have been composed to identify cancer patients at increased risk for treatment-related cardiovascular disease (CVD) [12-14]. Despite those advances, clinical dilemmas can be identified and differences in clinical practice exist. The most widely used guidelines on monitoring risk and cardiac dysfunction, from the American Society of Clinical Oncology, the Canadian Cardiovascular Society [15] and the European Society of Cardiology [16], provide expert-based summaries of the relevant literature on risk assessment, prevention and early detection up to 2016. However, due to lack of evidence, they do not cover many of the specific clinical situations encountered by physicians treating patients with toxic cardiomyopathy due to anti-neoplastic treatment, and they lack recommendations on, for example, which patients should receive primary prevention by means of prophylactic beta-blockers, angiotensin-converting enzyme inhibitors (ACEi) and other agents [17]. The very recently published guideline from the European Society of Medical Oncology provides an excellent overview of a number of clinical dilemmas and concisely summarizes the available literature on these topics, while acknowledging the paucity of conclusive data from which to formulate firm management recommendations [18].
Hereby, we report findings from a cross-sectional online survey performed among cardio-oncology experts. The aim was to study clinical practices and expert approaches to several clinical cardio-oncological dilemmas regarding prediction, prevention and treatment of CVD in adult cancer patients, where evidence for definitive management recommendations is as of yet scarce.

Composition of questions in the cross-sectional online questionnaire

The cardio-oncology team at the Karolinska University Hospital is composed of clinically active oncologists (EH, AP, RA) and cardiologists (AMB, LH [prev]) and holds scheduled meetings at regular intervals to discuss patients with active oncological and cardiovascular conditions. The survey questions included in the online questionnaire were composed by the cardio-oncology team based on clinical dilemmas arising from these multidisciplinary discussions and were approved by all team members.

Selection of candidates for the online survey

Suitable candidates for the cross-sectional online questionnaire were selected based on a literature review on PubMed. Search words for selection of the articles included 'cardiotoxicity', 'oncology', 'cardiovascular toxicity', 'cancer', 'cardiooncology' and 'cardio-oncology'. Papers written in English from January 2000 up to September 2018 were included. The first and last authors were added to the list of candidates if their email address could be retrieved from the corresponding-author list, personal connections or an Internet search. In case these contact details could not be retrieved, the corresponding author of the article was added to the candidate list instead. In addition, the website clinicaltrials.gov was searched for studies related to cancer therapies and cardiotoxicity/cardiovascular toxicity, including studies on imaging, biomarkers, systemic therapy, radiotherapy and interventions. The principal investigators of these trials were added to the candidate list.

Online questionnaire

The questionnaire consisted of 21 multiple-choice questions and one open question requiring a written response. All questions and possible multiple-choice answers are presented in Table 1. For each question, only one of the answers could be selected. As an alternative to the provided answers, for every multiple-choice question there was the possibility to select 'other' and write free text instead. The online survey was created on the platform 'KI Survey', designed and maintained by the Karolinska Institute, Stockholm, Sweden. This platform is an independent service; data are stored in a protected database and are not used for commercial purposes. Email addresses are entered in the platform to distribute the invitations to the selected candidates. The email addresses are not connected to the completed surveys, enabling anonymity of the survey responders. The survey was sent out on November 2nd, 2018, and two reminders were sent to the non-responders (November 17th and November 28th).

Demographics

Ninety-three (25%) of the 372 experts approached completed the online questionnaire (Fig. 1). Of these 93 respondents, most were clinicians (Fig. 1, Table 1). The remaining 9% held positions such as nurse-researcher, pharmacologist, and clinical or psychological researcher. Figure 2 provides an overview of the answers, and Additional file 1 provides the exact number of responses to the multiple-choice questions and the responses given in the open text fields.
Ninety percent of the participants worked at a university/teaching hospital, and most had over 10 years of clinical experience, whereas 4% were still in training. [Table 1 caption: Questions in the online survey. For each question, only one of the answers could be selected; as an alternative, there was the possibility to write free text instead of choosing one of the available options. In the last column, the percentage response to the respective options is given. Additional file 1 includes the remarks given in the open text field at the option 'other'.] About one third had over 10 years of experience in cardio-oncologic care and/or research. Little over half of the respondents were male.

Organization of care

Most of the respondents had access to multidisciplinary conferences at their institution, either as scheduled meetings (41%) or as ad-hoc discussions and referrals (46%), but a small percentage (5%) mentioned a lack of opportunities for multidisciplinary interaction. The majority of respondents had ongoing clinical trials in the field of cardio-oncology at their clinic. Moreover, little over half had relevant pre-clinical and/or experimental studies currently ongoing.

Primary prevention during oncological treatment

Preventive medication before and during oncologic treatment

The question asked of the experts was: 'Do you prescribe preventive medication before start of a potentially cardiotoxic oncological treatment (e.g. anthracyclines, trastuzumab) in patients with a normal left ventricular ejection fraction (LVEF) and without uncontrolled risk factors for CVD?' About half of the respondents do not routinely prescribe preventive medication before start of a potentially cardiotoxic oncological treatment in patients with a normal LVEF and without uncontrolled risk factors for CVD. One in ten routinely initiates treatment with either an ACEi or angiotensin receptor antagonist (ARB), or a combination of ACEi or ARB with beta-blocker/statin and/or anticoagulants, and 17% prescribe preventive medication only in case of an estimated increased risk for development of cardiotoxicity, based on published risk-estimation scores [12,13]. Some respondents mentioned that a cardiologist should decide in this matter, and one respondent mentioned applying the coronary artery calcification (CAC) score for decision-making.

Life-style interventions before and/or during treatment initiation

Almost three-quarters of respondents report providing recommendations on a healthy lifestyle, such as physical exercise, weight loss and smoking cessation. In contrast, 5% reportedly refrain from such recommendations due to lack of robust scientific evidence supporting such interventions, and an equal number do not give such recommendations due to time limitations. Some respondents report offering exercise counseling to their patients.

Primary prevention during follow-up

Strategies for prevention of CVD after cancer treatment completion

More than half of respondents (60%) have a routine follow-up schedule for patients treated with potentially cardiotoxic treatments. The majority use the ASCO guidelines on prevention of cardiac dysfunction in cancer survivors, or alternatively locally/regionally constructed guidelines.
Interestingly, almost half of the respondents consider a cardio-oncology outpatient clinic the appropriate forum for cardiovascular risk management (CVRM) in cancer survivors. Some respondents indicate that this should primarily be performed by a general practitioner, and a few mention that this is a joint effort between general practitioners and cardiologists. Only one respondent deems that the patients themselves should be responsible for CVRM. About half of the respondents regard cardiovascular risk-factor management according to general CVD guidelines as the best strategy for preventing future development of CVD in patients treated for a malignancy. Almost a quarter see a role for preventive medication, such as ACEi, ARB or beta-blockade, in prevention of future CVD, whereas one fifth of all respondents would prescribe such medications only to patients with an increase in troponins and/or NT-proBNP. Moreover, some see a primary role for lifestyle modifications such as weight loss, increased physical exercise and smoking cessation. Table 2 provides an overview of the answers to the following eight questions subdivided by respondents' occupation, i.e. oncologists vs. non-oncologists and cardiologists vs. non-cardiologists.

(Neo-)adjuvant systemic therapy for breast cancer

A little less than half of respondents recommend combination chemotherapy with anthracyclines and taxanes for a patient with a previous cardiovascular event, controlled symptomatology and a normal LVEF, whereas about a quarter would recommend non-anthracycline-containing chemotherapy. Oncologists tended to somewhat favor non-anthracycline-based chemotherapy schemes compared with the rest of the respondents (Table 2).

Asymptomatic LVEF declines during trastuzumab treatment

In patients treated with trastuzumab (in the curative and/or palliative setting) who develop an asymptomatic left ventricular ejection fraction (LVEF) drop to < 50% but ≥ 45%, almost two thirds of respondents will continue trastuzumab treatment, where half of this group does so after consultation with a cardiologist. Only some report that they never continue trastuzumab under those circumstances. The oncologists as a group responded quite similarly to the rest of the respondents.

Prolonged QTc

About half of respondents continue treatment with continued monitoring of the QTc in case a patient develops a prolonged QTc under systemic oncological treatment, whereas some responded that they never check the QTc during anti-cancer treatment. A quarter of respondents interrupt oncological treatment and re-initiate therapy after the QTc has normalized. Some respondents nuanced their answer by stating that their approach strongly depends on the length of the QTc, and one continues anti-cancer treatment but discontinues supportive medication in the first instance. Interestingly, one third of oncologists report never checking the QTc during anticancer treatments (Table 2).

Cardiac ischemia related to fluoropyrimidines

About one third of respondents will not continue treatment with fluoropyrimidines (5-FU, capecitabine) after a patient has developed an acute coronary syndrome (Table 1), whereas almost half of the oncologists would not do so (Table 2). One in ten respondents in the group as a whole re-initiates therapy after the patient has received treatment with a calcium antagonist, and about a quarter would only do so in case there was no enzymatic myocardial infarction and adequate anti-ischemic treatment has been initiated. For this question, a quarter of respondents selected the option 'not applicable'.
One respondent mentioned doing genetic testing (presumably for DPYD deficiency [RA]), one reported having a rechallenge protocol, and one would only restart in case of a clean angiography.

Management of CVD: cardiovascular treatment decisions

Blood pressure during anti-VEGF therapy

The majority of respondents aim for a systolic blood pressure < 140 mmHg in patients on anti-Vascular Endothelial Growth Factor therapy (i.e., bevacizumab, tyrosine kinase inhibitors); over 80% of cardiologists agree on this (Table 2). Only one respondent indicated that he/she would initiate treatment in case of clinical symptoms of hypertension and/or proteinuria.

Novel oral anticoagulants (NOAC) for patients on oncological treatments

Most respondents prescribe NOACs for atrial fibrillation and/or venous thromboembolism, whereas some do so for only one of the two indications (12 and 2%, respectively). About one tenth of respondents (11%) do not prescribe NOACs because of possible interactions with oncological treatments or increased bleeding risk, whereas a number of respondents answered that the decision to treat with a NOAC is highly dependent on various factors such as indication, underlying malignancy and oncological treatment.

Biomarkers for subclinical cardiovascular toxicity

About one quarter of respondents use the LVEF only for clinical decision-making, whereas little over half report using additional biomarkers for this purpose. The most frequently used biomarkers are the circulating biomarkers NT-proBNP and troponins, followed by global longitudinal strain on echocardiography. In addition, about 17% of respondents indicated using combinations of different biomarkers for subclinical cardiovascular damage for clinical decision-making, mainly the combination of circulating biomarkers and strain. Cardiologists seem to favor the use of biomarkers in addition to the LVEF more than the other respondents (Table 2).

Implantable cardioverter-defibrillator (ICD) and/or cardiac resynchronization therapy (CRT) for cancer-treatment-induced heart failure

Over half of all respondents (59%) consider placement of an ICD and/or CRT for patients with cancer-treatment-induced heart failure, whereas 12% indicate that they do so depending on the prognosis of the malignancy. Cardiologists are clearly more in favor of considering an ICD than the other respondents, as is apparent from Table 2.

Discussion

In this article we describe the results of a cross-sectional online survey among cardio-oncology experts. The aim of our project was to study clinical practices and expert approaches to several clinical cardio-oncological dilemmas regarding prediction, prevention and treatment of CVD in adult cancer patients, where definitive management recommendations are as of yet lacking. By sending out this online questionnaire, we were able to obtain many highly valuable insights into clinical cardio-oncology practices from international experts. We conclude that it is feasible to perform such a survey, as has been done among health-care professionals in other fields of medicine [17-19]. The vast majority (80%) of respondents were physicians, most of them cardiologists (50% of all respondents), with a long period of professional expertise, as three quarters had over 10 years of professional experience. This is expected based on the method we chose to select potential respondents, namely being first and/or last author of papers published in the research area of cardio-oncology.
We believe that this group of respondents has a firm clinical and scientific basis from which to provide insights into the clinical dilemmas discussed in our survey. One in six respondents mentioned using baseline predictive models for future CVD to decide on the prescription of medication as primary prevention before the start of an oncological treatment. Reliable predictive biomarkers, enabling an estimation of the risk of future CVD before an oncological treatment is initiated, are highly needed. Such biomarkers would enable selection of patients who are candidates for primary prevention, intensified lifestyle modification and/or screening for subclinical CVD. As of now, some published studies have investigated such predictive models in breast cancer patient cohorts [12-14], but methodological limitations, and especially the lack of external validation in independent patient cohorts, make it as of yet hard to implement such scores in routine clinical practice. The possibilities and limitations of such predictive models are also summarized in the previously mentioned guidelines [15-18]. The majority of respondents mention that they provide recommendations for optimization of lifestyle-related risk factors before a cancer treatment is started, and around 10% of respondents see a role for interventions targeting lifestyle modification in the prevention of future CVD after completion of oncological treatments. Such risk factors for CVD include obesity, smoking, hypertension and dyslipidemia. The presence of those risk factors, which can actually be seen as shared risk factors for the development of CVD and cancer, is associated with a higher risk of treatment-related cardiovascular toxicity [12,19,20]. The guidelines on management and prevention of cardiac dysfunction in cancer patients [15-18] underscore the importance of screening for, and educating cancer survivors about, lifestyle modifications such as physical exercise and dietary habits aimed at preventing weight gain. None of the guidelines mentions smoking cessation, but this might also exert positive benefits for cardiovascular health in cancer survivors, as well as decrease future cancer risk. An EBCTCG meta-analysis indicated an increased risk of late radiotherapy-induced cardiac toxicity in smokers [21], supporting pro-active recommendations for smoking cessation. At the moment, no definite conclusions can be drawn as to what extent interventions targeting physical exercise or weight loss during or after cancer treatments decrease CVD in cancer survivors. To the best of our knowledge, there are no studies with a physical exercise intervention that used cardiovascular outcomes as the primary endpoint. Some exploratory analyses of the existing studies have shown beneficial cardiovascular outcomes, such as preserved cardiorespiratory fitness [22], lower insulin levels [23] and less treatment-related tachycardia [24]. Intervention trials aiming to ameliorate treatment-related cardiovascular toxicity by means of a physical exercise intervention are ongoing. A Canadian study (NCT03131024) investigates the role of caloric restriction and physical exercise, with LVEF reserve at 2-3 weeks and 1 year after adjuvant treatment completion as the primary endpoint.
In addition, a French study (NCT02433067) has been initiated in patients undergoing systemic adjuvant treatment for early HER2-positive breast cancer, in which patients are randomized between physical activity and standard of care; the change in LVEF from baseline to 6 months after treatment start is the primary endpoint of this trial. Over half of respondents use, in addition to the LVEF, (combinations of) additional biomarkers for subclinical cardiovascular toxicity in their clinical decision-making. The most used are the circulating biomarkers NT-proBNP and troponins, and global longitudinal strain on echocardiography. In addition, one in five respondents added in the open text response that they used combinations of those, in some cases combined with cardiac magnetic resonance imaging. The ESMO guideline discusses the role of the different biomarkers and concludes that the routine use of cardiac biomarkers (cardiac troponins, BNP and NT-proBNP) for patients undergoing potentially cardiotoxic chemotherapy is not well established [18]. In addition, the guideline mentions the potential usefulness of strain on echocardiography, but no statements are made as to whether the use of such parameters can already be implemented in the clinic. Ideally, a combination score of (temporary) changes in biomarkers for subclinical cardiovascular damage could serve as a tool to identify patients at increased risk for future CVD. To the best of our knowledge, studies investigating such biomarker combinations have not yet been initiated. An important finding in our survey is the considerable heterogeneity in management and treatment decisions regarding the clinical dilemmas addressed by the questionnaire, e.g. the routine use of biomarkers, the management of fluoropyrimidine-induced vasospasm, and primary prevention strategies. This heterogeneity could result in differences in the standard of care depending on the site where a patient is treated, which is an undesirable situation and underscores the need for firm evidence addressing these clinical dilemmas. The possibility of performing digital, web-based surveys, in which participants can take part in an anonymized setting, has resulted in widespread use of such instruments in many different societal settings. In medicine, web-based questionnaires have been used to study common practices and physicians' attitudes towards a large number of different items. We selected potential respondents based on academic merits, and the responses in the survey revealed that the responders constituted a group of mainly physicians with long clinical experience within cardio-oncology. Because of the nature of multiple-choice questions, in which one can select only the single answer one regards as most fitting, some nuance and information is inevitably lost from the survey. However, we tried to minimize this by providing a combined answering option as well as an open text field option. The applied method of selecting potential participants can be a source of bias, as persons actively involved in cardio-oncology research may tend to favour a certain intervention that they have been studying. This could be even more pronounced because positive studies are more likely to be submitted and accepted for publication. Additional sources of bias may originate from respondents being co-authors or coming from the same institution, as well as from the presence of different medical payer and provider systems that can put different constraints on practice.
A quarter of all invited respondents completed the survey after the initial invitation and two reminders. No financial or other incentives were provided, although these might have resulted in higher response rates [25]; neither was an answer-tracking system used, in line with findings that its use may negatively impact response rates [26]. Our response rate is somewhat lower than in studies that have investigated response rates of physicians to web-based surveys, where percentages between 29 and 35% were found [25,27]. Because of this response rate, we cannot exclude the possibility that non-response bias has influenced the current findings [28]. The questions selected for this survey were composed by the clinical cardio-oncology team at our institution, with the aim of identifying practices in clinical dilemmas in cardio-oncology. There may be suboptimal formulations and, possibly, additional answer options would have been even more appropriate to obtain maximal insight into this matter. For this reason, we provided open text options for each question. Individual responses to the respective questions are given in Additional file 1. The option 'not applicable' (N/A) was added to enable respondents to refrain from answering a question that was beyond their expertise. This might also have complicated the interpretation of the findings, because respondents unsure of the best option may have made an 'educated guess' instead of selecting N/A. The survey was specifically designed to be generic and not to focus on specific tumor types, with the exception of one question on breast cancer therapy selection. This may have influenced the interpretation of the responses.

Conclusions

In conclusion, a web-based survey was conducted among cardio-oncology experts to explore attitudes towards clinical dilemmas related to prediction, prevention and management of treatment-related cardiovascular toxicity in cancer patients, for which high-quality evidence from the literature is still lacking. The answers provided in this survey shed light on expert-based practices in clinical cardio-oncology dilemmas. Attitudes towards, as well as discrepancies in, those dilemmas are presented. The existing discrepancies clearly indicate the need to generate high-quality data that allows for more evidence-based recommendations in the future.
Neuroligins Nlg2 and Nlg4 Affect Social Behavior in Drosophila melanogaster The genome of Drosophila melanogaster includes homologs to approximately one-third of the currently known human disease genes. Flies and humans share many biological processes, including the principles of information processing by excitable neurons, synaptic transmission, and the chemical signals involved in intercellular communication. Studies on the molecular and behavioral impact of genetic risk factors of human neuro-developmental disorders [autism spectrum disorders (ASDs), schizophrenia, attention deficit hyperactivity disorders, and Tourette syndrome] increasingly use the well-studied social behavior of D. melanogaster, an organism that is amenable to a large variety of genetic manipulations. Neuroligins (Nlgs) are a family of phylogenetically conserved postsynaptic adhesion molecules present (among others) in nematodes, insects, and mammals. Impaired function of Nlgs (particularly of Nlg 3 and 4) has been associated with ASDs in humans and impaired social and communication behavior in mice. Making use of a set of behavioral and social assays, we, here, analyzed the impact of two Drosophila Nlgs, Dnlg2 and Dnlg4, which are differentially expressed at excitatory and inhibitory central nervous synapses, respectively. Both Nlgs seem to be associated with diurnal activity and social behavior. Even though deficiencies in Dnlg2 and Dnlg4 appeared to have no effects on sensory or motor systems, they differentially impacted on social interactions, suggesting that social behavior is distinctly regulated by these Nlgs. Drosophila may serve as a suitable organism to study the basis of these diseases since it performs elaborate social interactions, such as courtship (20,21) and aggression with the establishment of social dominance (22,23), uses intraspecific acoustic communication (24), establishes long-term memory in classical and operant learning paradigms (25), and performs sensory-motor tasks with great precision (26). Drosophila offers molecular and genetic tools to identify the functions of individual genes and proteins, their interaction partners within cellular/molecular pathways [recent summary (27)], and their impact on physiology and behavioral performance. Homologs of disease-related genes can be mutated globally or in particular tissues or cell types, and transgenic flies may express human genes with or without characteristically disease-related mutations. Whether, and how, such genetic alterations impact Drosophila social behavior needs to be assessed in a quantifiable manner. Neuroligins (Nlgs) are postsynaptic adhesion molecules that typically associate with presynaptic neurexins to form bidirectional signaling complexes required for the correct formation, maturation, and functional adjustment of chemical synaptic connections between neurons (28)(29)(30)(31). Additional neurexinindependent synaptic functions have also been reported [reviewed by Reissner et al. (32)]. In mammalian nervous systems, different Nlgs are differentially expressed at different types of synapses [reviewed in Ref. (33)(34)(35)]. Nlg1 is predominantly expressed at excitatory glutamatergic synapses, fostering the accumulation of postsynaptic density proteins and ionotropic and metabotropic glutamate receptors (36). Nlg2 is selectively expressed at inhibitory synapses, where it associates with gephyrin and recruits GABA or glycine receptors (37). 
Nlg3 and Nlg4 appear at both excitatory and inhibitory synapses with preferences of Nlg3 for GABAergic and Nlg4 for glycinergic synapses (38)(39)(40)(41)(42). Alterations in nlg genes were found in patients affected by ASDs (40)(41)(42)(43), and mutations of the same genes caused autism-like phenotypes in rodent model organisms (35,44). ASDs represent neuro-developmental disorders that cause impairments in social interaction and communication, accompanied by restricted and repetitive behaviors. In particular, mutations in nlg3 and nlg4, mutations in genes encoding direct interaction partners of Nlgs, such as neurexins and shank, and alterations of other proteins involved in synaptic mechanisms are directly associated with ASD (44)(45)(46). Based on the differential expression of Nlgs at different types of synapses, it was hypothesized that ASD phenotypes may result from a disturbed balance of excitatory and inhibitory synaptic transmission in brain regions controlling the respective behaviors and functions (47). Supporting evidence for this hypothesis derived from both ASD patients [review by Dickinson et al. (48)] and studies on rodent models for ASD (46,(49)(50)(51)(52). Much like in mammals, insects present multiple neuroligin genes (38), whereby the four genes found in D. melanogaster (nlg1-nlg4) show differential expression at central and peripheral nervous synapses. None of these genes has a particular similarity to the four mammalian nlg genes, and the designations (vertebrate nlg1-4 vs Drosophila nlg1-4) do not imply phylogenetic relatedness. Previous studies have shown that Dnlg2 is predominantly expressed by excitatory postsynapses (53), while Dnlg4 is abundant at inhibitory synapses (54). Immunostaining with a Dnlg2 antibody in the adult Drosophila brain shows that the protein is abundant in the mushroom body and the central complex (W. Xie, personal communication). Both these brain structures are involved in the control of diverse behaviors like short-term courtship memory, center avoidance, olfactory learning, sleep regulation, and spatial orientation (55)(56)(57)(58)(59). Antibody staining against Dnlg4 revealed high expression in the lateral clock neurons (LNvs), possibly explaining the abnormal sleep behavior found in dnlg4-mutant flies, as well as expression in the central complex (54). Earlier studies (16) revealed that deletion of the dnlg2 gene alters social behavior in Drosophila. While retaining intact sensory perception, Dnlg2-deficient flies display reduced social interactions with respect to male-female courtship and male-male agonistic behavior, produce altered acoustic communication signals, and often fail to terminate behavior upon context changes. In the present study, we subjected flies deficient in Dnlg2 (typically expressed at excitatory synapses) or Dnlg4 (typically expressed at inhibitory synapses) and wild-type D. melanogaster (wt) to a series of behavioral assays that assess their social interactions (courtship, aggression, group formation, reaction to conspecific songs) and, in addition, analyzed their acoustic communication signals. We also tested for motor and sensory defects by analyzing locomotion, open space avoidance, circadian activity, and the sound sensitivity of their hearing organs. Our results show that both Dnlg2 and Dnlg4 are implicated in Drosophila social behavior.
MATERIALS AND METHODS

Animals

All flies were reared at 25°C with 60% humidity under a 12:12-h dark/light cycle and on standard medium, which was made from 500 g fresh yeast, 500 g sugar, 20 g salt, 60 g agarose, 250 g flour, 1 l conventional apple juice (Alnatura, Bickenbach, Germany), and 30 ml propionic acid. Water was added so that the medium would amount to 7 l. Studies were performed with the Dnlg2-deficient mutant line dnlg2 KO17 (provided by Wei Xie, Southeast University, Nanjing, China), generated by targeted knockout of the dnlg2 genomic locus (53). Studies with a second Dnlg2-deficient line [dnlg2 KO70 ; (53)] that was tested in some of the assays generated qualitatively similar results. The Dnlg4-deficient mutant line was generated by crossing a dnlg4 del (54) deletion and a dnlg4 point mutation (dnlg4 LL01874 ) line (both provided by Junhai Han, Southeast University, Nanjing, China) (54), as both of them are homozygously lethal. We note that both fly lines are hypomorphs and still express limited amounts of the respective Nlg (less than 30% compared to wild type). Canton-S was used as wild-type control, and all mutant lines were kept as "Cantonized" lab stocks (dnlg2-mutants were outcrossed for six generations in-house; dnlg4-mutants were obtained as outcrossed). Unless otherwise stated, flies were tested at the age of 5-7 days.

Acquisition and Analysis of Locomotion Data

Flies were transferred individually into a circular arena of 40-mm diameter filled with 1% agarose/1% glucose and closed with an anti-glare Perspex plate. A distance of 2 mm between pane and medium allowed the flies to walk freely but prevented them from flying (see Figures 1A,C). The arena was produced using an Ultimaker 3D printer (Ultimaking Ltd., Geldermalsen, Netherlands), and movies were recorded using TroublePix software (NorPix Inc., Montreal, QC, Canada) and a MotionTraveller 500 camera (IS, Imaging Solutions GmbH, Eningen, Germany) at 500 frames per second (fps). The flies were illuminated from below with infrared LEDs (Pollin Electronic GmbH, Pförringen, Germany; #531090). Full LED illumination caused a temperature increase of ca. 0.01°C per minute. Because animals were allowed to spend maximally 5 min in the arena, the corresponding temperature change they experienced during their stay was ca. 0.05°C. Post hoc analysis of the video footage was performed using ivTools (Dr. Jens P. Lindemann; Bielefeld University) to acquire walking trajectories. To deduce fly-based velocity combinations from the trajectory, we used unsupervised k-means clustering to classify data points into a set of "k" clusters (60). The fkmeans function for Matlab authored by Tim Benham was used. 1

Probability Density in a Circular Arena

Animals were set in the middle of a circular arena 40 mm in diameter (see Figure 1A and previous paragraph). Each animal was recorded for 10 s of consecutive walking to exclude effects caused by resting. To obtain probability densities, the Cartesian coordinates x and y, which were acquired through trajectory tracing, were transformed into polar coordinates with polar angle θ and radius r. We calculated the histogram of r for each fly and the median histogram for each strain, respectively. We then normalized that histogram for the surface area of each bin and normalized the resulting histogram so that its integral is 1, providing a probability density. We also analyzed the median radius position by calculating the median r of each individual fly.
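The bin-area normalization is the step of this procedure most easily gotten wrong, so a minimal sketch may be useful. The following Python function is our own illustrative re-implementation of the computation described above, not the authors' MATLAB/ivTools code; the function name, the choice of 20 bins, and the use of NumPy are assumptions made for the example.

import numpy as np

def radial_probability_density(x, y, arena_radius=20.0, n_bins=20):
    # Transform Cartesian trajectory coordinates (mm, with the arena
    # center at the origin) into the polar radius r.
    r = np.hypot(x, y)
    # Histogram of radial positions.
    edges = np.linspace(0.0, arena_radius, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    # Normalize each bin by the surface area of its annulus: outer
    # rings cover more area than inner ones and would otherwise be
    # overweighted.
    ring_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    density = counts / ring_area
    # Rescale so that the density integrates to 1 over the arena.
    density = density / (density * ring_area).sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

The median radial position of an individual fly is then simply np.median(np.hypot(x, y)).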
The statistical difference between fly lines was calculated using a two-sided Kolmogorov-Smirnov test. p-Values were corrected via the Benjamini-Hochberg false discovery rate procedure (see Statistical Analysis).

Circadian Rhythm

Circadian rhythm was analyzed using the Drosophila Activity Monitoring (Tritech Research, CircKinetics 2 ) System. Flies were placed individually in glass tubes (diameter 3 mm; length 7 cm), sealed with a gas-permeable cap on one side and standard food on the other side. The food medium was identical to the rearing medium described before. The tubes were inserted into an incubator with a 12:12-h dark/light cycle, matching that of the fly breeding incubator. Crossings of the midline were detected as interruptions of an infrared light beam and were automatically counted for 7 days. The first 48 h were omitted to avoid differences in behavior due to the relocation of the animal.

Analysis of Social Distance and Group Behavior

Flies were allowed to enter a circular aluminum walking arena of 66-mm diameter through two entrances on opposite sides (see Figure 1D). The arena was illuminated from below with LEDs (Nichia Cooperation, Tokushima, Japan; #NSSW157AT-H3). Each entrance was connected to a 12-chamber rotating revolver loaded with a single fly per chamber, allowing one fly at a time to enter the arena every 90 s. The positions of individual flies were determined at a frame rate of 50 fps, and the trajectories were analyzed afterward with ivTools (see Acquisition and Analysis of Locomotion Data). To associate individual flies with a group, we used agglomerative hierarchical clustering. This algorithm uses the Euclidian distance between individual flies to determine their incorporation in a group. The agglomerative clustering runs showed a clear threshold at about 20 mm interindividual distance and, accordingly, animals more than 20 mm away from the next fly were counted as not being part of a group. At this distance, a Drosophila fly covers the visual field of only one ommatidium (61). Only flies that gathered together for more than 30 s were counted as groups. Male-male courtship and the formation of chains of multiple males following each other were scored from the videos by eye. The latter chaining behavior was only taken into consideration when a male followed another one with its wing extended for more than 3 s (most chains were stable for several minutes). The leading animal was not considered to be actively chaining and was, therefore, excluded. An example of chaining behavior can be seen in Figure S1A in Supplementary Material.

Peripheral Auditory Functions

To test for possible defects in hearing, we affixed the flies with wax on a focus holder (62) and then measured vibrations of their antennal sound receiver (63). Vibrations were measured at the tip of the antennal arista using a PSV-400 Laser-Doppler-Vibrometer (Polytec GmbH, Waldbronn, Germany). For acoustic stimulation, pure tones were broadcast via a loudspeaker positioned ca. 10 cm behind the fly. The stimulus frequency was adjusted to match the individual best frequency of the fly's receiver, as determined from the power spectrum of its vibrations in the absence of sound (64). Electrophysiological recordings of compound action potentials of auditory receptor neurons were performed with an etched tungsten electrode inserted between antenna and head (65).
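To make the group-assignment step in the Social Distance section above concrete, here is a minimal Python sketch of agglomerative clustering with the 20-mm cutoff, written with SciPy for illustration. It is not the authors' implementation, and the choice of single linkage is our own assumption, since the text does not specify the linkage criterion.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def assign_groups(positions, cutoff_mm=20.0):
    # positions: (n_flies, 2) array of x, y coordinates (mm) in one frame.
    if len(positions) < 2:
        return np.ones(len(positions), dtype=int)
    # Agglomerative clustering on pairwise Euclidean distances.
    tree = linkage(pdist(positions), method="single")
    # Cut the dendrogram at the interindividual distance threshold;
    # flies farther than the cutoff from every cluster end up as
    # singletons, i.e. are counted as not being part of a group.
    return fcluster(tree, t=cutoff_mm, criterion="distance")

# Usage: group sizes per frame, e.g.
# labels = assign_groups(positions)
# sizes = np.bincount(labels)[1:]   # fcluster labels start at 1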
Sound Recordings

Male courtship songs (CSs) in the presence of a decapitated, 5- to 7-day-old virgin female were recorded using a microphone (Bruel & Kjaer Type 4165) in a soundproof chamber. The recorded signals were amplified, band-pass filtered (70-5,000 Hz), and directly digitized with a sampling frequency of 44,100 Hz. For acquisition, Audacity 2.0.6 3 was used. Analysis was done using custom-made MatLab programs. To determine the dominant frequency components of the songs, Fast Fourier Transformation using a 4096 Hanning window was applied.

Competitive Courtship Assay with Acoustic Stimulation

Competitive courtship assays were performed with two socially naïve males (age 7-12 days) placed together with a decapitated, 5- to 7-day-old, virgin wt female in a circular arena with 2 cm diameter. In this case, the age disparity between the male flies should have had limited effects on mating behavior: as the female is decapitated and mating is never successful, the attractiveness of the males does not influence the female's choice as described for different male age groups (66,67), and the initiation of courtship by the male does not vary much between the 7th and 14th day of age (68). The bottom of the arena consisted of a fine mesh. During the experiments, flies were exposed to either white noise (WN), aggression songs (AS), or CSs that were previously recorded from wild-type males (69). Acoustic stimuli were presented by a loudspeaker situated below the arena. Videos of the experiments were recorded for 15 min with a frame rate of 30 fps. Only the periods from 5 to 10 min after the start of the experiments were considered. Frame-by-frame analysis of recorded videos was performed by an observer unaware of stimulus conditions and fly strain. For each frame, either idling, unilateral wing extension as a hallmark of courtship, or aggression behavior was allocated to the acting individual fly. This was done using an in-house developed software tool that allows for fast video annotation via a game pad. This system allowed for long scoring sessions and high throughput by the observers, who scored 9,000 frames per replicate. Aggressive behavior directed against the other male was recognized by aggressive acts like boxing, leg kicking (see Figure 7), and production of agonistic sound signals with both wings elevated. Courtship behavior toward the female was identified by unilateral wing extension associated with the production of CSs or clear copulation attempts with the abdomen. From the total duration of courtship (DC) and aggression (DA), we calculated a behavioral contrast (c):

c = (DA − DC) / (DA + DC)

Positive c values indicate that the male spent more time with aggression, while negative values denote that the animal spent more time with courtship.

Statistical Analysis

To test for significant differences between experimental groups, Fisher's permutation test was applied to evaluate the differences of the medians of the respective measured variables. In some cases, we used a two-sided Kolmogorov-Smirnov test and once Fisher's exact test (instances are indicated). p-Values were always corrected using the Benjamini-Hochberg false discovery rate (71) procedure by applying the Matlab implementation of David M. Groppe. 6

Footnotes: 4 www.python.org. 5 www.pygame.org. 6 https://de.mathworks.com/matlabcentral/fileexchange/27418-fdr-bh.

RESULTS

Heads of 50 Dnlg2-deficient and 50 Dnlg4-deficient flies were subjected to qPCR, revealing reduced levels of the respective dnlg transcripts by 27 and 40% compared to wild type (see Table S1 in Supplementary Material), respectively. We subjected these flies to various tests to assess their behaviors.
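As a concrete illustration of the behavioral contrast and the multiple-comparison correction described in the Methods above, the following Python functions give our own minimal re-implementations; they are not the cited Groppe MATLAB code, and the function names are ours.

import numpy as np

def behavioral_contrast(d_aggression, d_courtship):
    # c ranges from -1 to 1; positive values mean more time spent on
    # aggression, negative values more time spent on courtship.
    return (d_aggression - d_courtship) / (d_aggression + d_courtship)

def benjamini_hochberg(p_values):
    # Step-up false discovery rate procedure; returns adjusted p-values.
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out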
To identify general defects in mobility, we first monitored their spontaneous locomotion in a plain, circular arena. The locomotion trajectories were categorized using unsupervised k-means clustering as reported in Ref. (71,72). The resulting movement categories were two "forward-sideways movements," two "fast yaw turns," and "resting" [see also Ref. (61)]. The distribution of these movement categories showed no significant differences between wild type and the two mutant strains (Figure 2). Therefore, locomotion and its assembly from movement components seemed to be uncompromised by mutations in dnlg2 and dnlg4, and mutant defects in other behaviors are unlikely to result from general locomotion impairments. The avoidance of central (open) areas of an arena is called centrophobia (73,74). In Drosophila, impaired centrophobia has been associated with defects of the mushroom bodies (55). In contrast to wild-type flies that nearly exclusively circled around the edge of the arena, both dnlg-mutants often traversed the central part of the arena (Figure 3). Both mutant strains seemed to avoid the immediate vicinity of the walls, resulting in median radial positions that are significantly closer to the center than in wt (Figure 3B; two-sided Kolmogorov-Smirnov test, p < 0.01). Dnlg4-deficient flies even displayed a weak tendency for preferred occupation of central regions. A previous study demonstrated alterations of sleep in dnlg4 LL01874/Def -mutant flies (54). We thus analyzed activity patterns and found that Dnlg4-deficient flies have longer, but fewer, episodes of sleep compared to wild-type and Dnlg2-deficient flies (see Figures 4D,E). Dnlg2-deficient flies, however, seemed to show an opposite phenotype with more but shorter sleep episodes than the wild type. Wild-type and both Dnlg-deficient fly strains displayed activity peaks associated with, or slightly preceding, the dark-to-light and light-to-dark switches (Figure 4A). In dnlg4 LL01874/Def , these peaks lasted longer than in wild type and dnlg2 KO17 -mutants (Figure 4A); their overall activity was significantly reduced compared to both other strains during light periods (Figure 4B) and compared to wt during dark periods (Figure 4C). Compared to wt, Dnlg2-deficient flies displayed a slight but not significant increase of center-crossing activity during light phases (Figure 4B) and no difference during dark periods (Figure 4C). In summary, overall activity is reduced in dnlg4 LL01874/Def flies compared to wild-type flies (Fisher's permutation test corrected with Benjamini-Hochberg fdr; p = 0.00015) and dnlg2 (p = 0.00008), while dnlg2 KO17 flies show a tendency for increased activity that fails to reach significance (p = 0.09102), at least during periods of light. Previously, Dnlg2-deficient flies reportedly displayed abnormal social and mating behavior (16). In order to study altered attraction to conspecifics, we recorded the location data of 24 animals (per replicate) that were allowed to successively enter a circular arena from two opposite sides in 90-s intervals. To assess group formation, we used their positional data and subjected it to a hierarchical agglomerative clustering.
While wild-type males remained single with a probability of 39%, this probability was significantly increased in Dnlg2-deficient flies (60%) and reduced to 24% in Dnlg4-deficient flies (Figure 5C; Fisher's exact test). Group sizes also differed between the strains [Figure 5A; two-sided Kolmogorov-Smirnov test: wt vs dnlg2 KO17 p = 0.0198; wt vs dnlg4 LL01874/Def p = 3.2577 × 10 −4 ; dnlg2 KO17 vs dnlg4 LL01874/Def p = 2.5584 × 10 −9 ; N(wt) = 104, N(dnlg2 KO17 ) = 94, N(dnlg4 LL01874/Def ) = 119; p-values corrected after (70)]. Chaining behavior was excluded from this analysis, because this phenomenon was caused by an improper termination of courtship and not a direct effect on aggregation behavior. We also calculated the average distances of animals to their nearest neighbor within such groups. Compared to wild-type and dnlg2 KO17 -mutant males that maintained similar interindividual distances, the distance was reduced in Dnlg4-deficient flies (Figure 5D). Hence, although interindividual distances are only changed in dnlg4 LL01874/Def -mutants, Dnlg2-deficient males have a lower tendency to form groups, while dnlg4 LL01874/Def -mutants show an increased tendency to aggregate (Figure 5B). Courting D. melanogaster produce two types of songs, a sine song and a pulse song. These acoustic communication signals play important roles in driving female mating decisions. By comparing songs between the three strains, we found that mutations in both dnlg2 KO17 and dnlg4 LL01874/Def affect the songs. The major frequency component of sine songs was quite variable in wild type, ranging between 120 and 160 Hz (Figure 6B). While the dominant sine song frequency was slightly lowered in Dnlg2-deficient flies, Dnlg4-deficient flies produced songs with higher sine song frequencies (dnlg2 KO17 vs dnlg4 LL01874/Def p = 0.00625; Fisher's permutation test, corrected by Benjamini-Hochberg fdr). Analysis of courtship pulse songs revealed no differences in amplitude and shape (number of oscillations per pulse) between wild-type males and the two dnlg-mutants, suggesting that the neuromuscular components that generate acoustic communication signals were not compromised by the mutations in Dnlgs. Interpulse intervals had median durations of 38 ms in wild types and 37 ms in dnlg4 LL01874/Def but were significantly longer in CSs of dnlg2 KO17 -mutants (Figure 6C). We next analyzed male-chaining behavior, whereby males follow each other with one extended wing (75). The probability of wild-type males to engage in male-directed courtship was generally low and remained below 5% even when the arena was filled with larger numbers of individuals (Figures 6D,E). Chaining, however, was entirely absent in Dnlg2-deficient flies and thereby significantly lower compared to wt (p < 0.001; Kolmogorov-Smirnov). In contrast, dnlg4-mutants formed courtship chains that included up to 17 animals, and the probability of individual flies to engage in chaining increased significantly (p < 0.001; Kolmogorov-Smirnov), reaching more than 60% as the number of animals in the arena was increasing (Figure 6D). In summary, absence of Dnlg2 and Dnlg4 not only affects male CSs, but also chain formation, a male-directed courtship behavior. Hahn et al. (16) reported reduced social interactions in Dnlg2-deficient flies in competitive courtship assays where two males switched between male-directed agonistic and female-directed behavior.
A recent study (76) further reported that AS promote aggression, while CSs inhibit aggressive interactions between Drosophila males. In order to test whether Dnlg2- and Dnlg4-deficient flies react appropriately to sound signals, we extended the competitive courtship assay with acoustic stimulation (Figure 6A). During stimulation with WN and CSs, wt males spent more time courting the female than displaying aggression against the other male (see Figure 7). During stimulation with aggression sounds, this male-directed aggression was increased [Figure 7A; Fisher's exact test corrected with Benjamini-Hochberg fdr; p-value wt(white noise) vs wt(aggression sounds) = 0.0237]. Neither Dnlg2- nor Dnlg4-deficient males altered the frequencies of aggressive and courtship behavior during stimulation with aggressive sounds. Stimulation with CSs slightly increased wt courtship behavior, whereas the opposite effect was seen in Dnlg2- and Dnlg4-deficient males, which reduced their courtship significantly and increased aggression (wt vs dnlg2 KO17 p = 0.0237; wt vs dnlg4 LL01874/Def p = 0.0237). Hence, both Dnlg2- and Dnlg4-deficient flies seem to fail to respond appropriately to sounds.

DISCUSSION

The presented behavioral data suggest that the trans-synaptic adhesion molecules Dnlg2 and Dnlg4 may play a prominent role in the neuronal regulation of Drosophila's social interactions. dnlg2 and dnlg4 are both expressed at central nervous synapses, but Dnlg2 is also present at neuromuscular synapses (53). Of the other two Drosophila Nlgs, dnlg1 is expressed at neuromuscular postsynapses (77) and dnlg3 is expressed in neuromuscular junctions and the central nervous system (78). Similar to Nlgs in mammalian central nervous systems, Dnlg1, Dnlg2, and Dnlg3 seem important for synaptic maturation and functional maintenance (studied at the neuromuscular junction) rather than being crucial for synaptogenesis. In order to assess the requirements of Nlgs for the proper functionality of neural circuits, we analyzed the effects of mutations in dnlg2 and dnlg4 on walking, hearing, sound production, and social behavior. Electrophysiological recordings from antennal auditory nerves detected no differences in auditory sensitivity between wild-type and mutant flies. Intact chemosensation can also be assumed, since males of both mutants correctly addressed females with courtship and males with agonistic behaviors in competitive courtship assays. In Drosophila, sex recognition and the assessment of female reproductive state have been demonstrated to largely depend on the detection of sex- and state-specific surface hydrocarbons (79,80). In addition, dnlg2 KO17 - and dnlg4 LL01874/Def -mutants, like wild-type flies, maintained peaks of locomotor activity when lights were switched on or off [(16,54), this study]. Nonetheless, the sleep rhythms of Dnlg2- and Dnlg4-deficient flies were altered in an opposing manner, with Dnlg4-deficient flies sleeping more often and shorter than wt, consistent with previous studies (54), and Dnlg2-deficient flies sleeping less often with longer duration (Figure 4). Notwithstanding seemingly normal sensory functions, both dnlg2 KO17 - and dnlg4 LL01874/Def -mutants thus show opposing deficits in sleep. Unlike other insect species, such as locusts and cockroaches, that contain both excitatory glutamatergic and inhibitory GABAergic motor neurons, Drosophila only possesses excitatory neuromuscular synapses (81).
Synaptic expression of dnlgs 1, 2, and 3 and consequences for synaptic transmission resulting from (83) demonstrated that initiation of sine and pulse song is triggered by descending brain neurons, whose activity synchronizes the intrinsic activity of thoracic pattern generators to a faster central clock. Avoidance of open areas is regarded as a measure for anxiety in various animal models (84)(85)(86)(87). Wild-type Drosophila exhibit an obvious centrophobism that critically depends on the functionality of the mushroom bodies (55). While total ablation of mushroom bodies reduced centrophobism, specific inactivation of mushroom body γ-lobes increased centrophobic behavior (55). Both dnlg-mutant strains used in this study showed reduced centrophobic behavior (Figure 3); dnlg2 is expressed in the mushroom bodies (unpublished immunocytochemical data by W. Xie) and, along with dnlg4, in the central complex (54). Both mushroom bodies and central complex are involved in the regulation of higher-order social behaviors (55)(56)(57)(58)(59), so the expression patterns seem consistent with the observed behavioral defects. During an unsupervised and data-derived group formation analysis, 20 mm emerged as the distance threshold for group interactions. This is nearly identical to the distance at which a conspecific fly is only detected by a single ommatidium in the compound eye of Drosophila (61,88,89). Even though courtship and potentially other social contexts also depend on other sensory modalities [e.g., olfaction (90)], vision seems to be the most accurate and direct sense to judge the interindividual distance. Therefore, it is not surprising that group formation might be limited by the visual acuity of Drosophila. Aggregation of individuals and the formation of groups is a basic feature of social interaction; for example, oviposition in female Drosophila is dependent on group size (91). Compared to wild-type males, dnlg2 KO17 -mutants displayed a reduced tendency to form groups, but those who accumulated in groups assumed similar minimal interindividual distances (Figure 5). In contrast, dnlg4 LL01874/Def -mutants had a lower tendency to remain single, formed larger groups, and assumed closer positions to other group members. Furthermore, this closer interindividual distance might have led to an increased formation of courtship chains in Dnlg4-deficient flies (Figure 6). Dnlg2-deficient flies showed a significantly lower chance of male-male courtship, which might be caused by their larger interindividual distance (Figure 6). Since it has been shown that CSs stimulate the formation of courtship chains (92,93), and especially the inter-pulse interval of CSs has been identified as a critical factor for species recognition and attractiveness (20,94), an alteration in song production might also lead to the chaining phenotype. Since dnlg2 KO17 -mutant males produced songs with significantly prolonged inter-pulse intervals (Figure 6C), complete absence of chaining behavior could also have been a consequence of less attractive songs. Thus, the chaining phenotypes might be epiphenomena of the altered interindividual distance and CS production. Absence of Dnlg2 and Dnlg4 altered social interactions of Drosophila males, without causing obvious impairments of sensory functions and execution of movements [this study, (16)].
Deficiency of Dnlg2 and Dnlg4, which seem to be differentially expressed at excitatory and inhibitory synapses, induced opposing deviations from wild-type behaviors in some behavioral paradigms, such as sleep rhythm, male chaining, group size, and interindividual distance. Other behavioral paradigms, such as center avoidance and stimulation with wt CSs, revealed equally altered behavior in a non-opposing fashion. Thus, Dnlg2 and Dnlg4 may play different roles in the regulation of synaptic transmission within brain neuropils implicated in the social behavior of D. melanogaster. Like fly Nlgs, mammalian ones are differentially expressed at different types of synapses, and deletion or overexpression of particular Nlgs resulted in altered proportions of excitatory vs inhibitory transmission in brain neuropils (28,51). Both overrepresentation of excitation and overrepresentation of inhibition have been associated with ASD phenotypes (48) and have also been observed in mouse models of ASD (95)(96)(97). Targeted manipulation of Drosophila Dnlg2 and Dnlg4 functions in specific brain regions might help to identify the neural circuits that regulate social behaviors and to assess the role of Nlgs in the balance between neuronal excitation and inhibition.

AUTHOR CONTRIBUTIONS

All authors designed the study. KC, AH, RK, IG, NH, RH, and BG collected and analyzed the data; HG and BG designed the behavioral setups and data acquisition. RH and BG wrote the manuscript, and all authors edited and approved the manuscript. KC and AH contributed equally.
ABCA3 deficiency from birth to adulthood presenting as paediatric interstitial lung disease

Abstract

Paediatric disorders of pulmonary surfactant may occur due to mutations involving the surfactant protein B, surfactant protein C, and ATP-binding cassette subfamily A member 3 (ABCA3) genes. Recessive frameshift or nonsense ABCA3 mutations are associated with respiratory failure and neonatal death, but milder phenotypes of ABCA3 deficiency due to missense, splice-site, and insertion/deletion mutations may result in survival beyond infancy. To date, only one case report describes the clinical course from birth to age 21 years, and there are fewer than 10 adult cases. No guidelines exist for medical therapy due to the rarity of this condition. We describe the clinical course over 39 years of a patient and her younger brother, who were both diagnosed at birth with an unspecified paediatric interstitial lung disease (ILD) and were eventually diagnosed with an ABCA3 mutation in their adulthood. Our report highlights the minimal progression of the ABCA3-related ILD without long-term medications, but the development of dyspnoea due to progressive pulmonary hypertension and airflow obstruction.

Introduction

Pulmonary surfactant consists of a mixture of proteins and lipids which are produced by type II alveolar epithelial cells and are essential in reducing surface tension at the air-liquid interface within the lungs. Genetic disorders of surfactant production include surfactant protein B, surfactant protein C, and the ATP-binding cassette subfamily A member 3 (ABCA3) [1]. ABCA3 surfactant protein disorder is an autosomal recessive condition resulting in loss of function of the phospholipid transporter involved with pulmonary surfactant [2]. More specifically, ABCA3 transports surfactant phospholipids into specialized secretory organelles known as lamellar bodies [3]. Although paediatric interstitial lung disease (ILD) due to mutations in the ABCA3 gene has been increasingly recognized [4], there is very little information about this condition and its clinical course from birth to adulthood in milder forms of this disease [5].

Case Report

A 25-year-old woman was referred for review of a longstanding history of exertional dyspnoea since infancy. She had a history of recurrent lower respiratory tract infections in her first year of life, associated with respiratory distress and diffuse interstitial changes. There was no family history of respiratory illness and both parents were healthy. On each admission, she was administered oxygen and antibiotics for presumed aspiration pneumonia. At five months, she was readmitted to a paediatric unit; multiple investigations were performed, including fibre-optic bronchoscopy, immunoglobulins, sputum cultures, sweat electrolytes, and milk precipitins, which were all unremarkable. A radionuclide gastro-oesophagram (milk scan) only revealed moderate-to-gross reflux in the prone position without evidence of pulmonary aspiration. An open lung biopsy was obtained at 11 months of age which demonstrated desquamated pneumocytes and a few foam cells (Fig. 1A) as well as fibrous thickening of alveolar septa and epithelialization of lining cells (Fig. 1B), with immunohistochemistry for CD68 highlighting prominent intra-alveolar macrophages (Fig. 1C) and cytokeratin showing prominent enlarged pneumocytes (Fig. 1D), leading to a histopathological conclusion of pulmonary interstitial fibrosis of uncertain aetiology.
She was treated with daily oral prednisolone from 11 months to four years of age at a starting dose of 2 mg/kg/day. When she was five years old, her younger brother was born with similar but milder clinical features and did not require hospitalization during his childhood. However, he was also treated with daily oral prednisone from five to seven years of age. Her quality of life appeared near normal apart from some exercise limitation in her adolescent years. At age 25 years, she was seen by an adult respiratory physician, having had a diagnosis of a paediatric ILD. Physical examination revealed finger clubbing and bilateral lower zone fine inspiratory crackles on auscultation. An arterial blood gas sample on room air demonstrated moderate hypoxaemia with partial pressure of oxygen (PaO 2 ) 61 mmHg, partial pressure of carbon dioxide (PaCO 2 ) 33 mmHg, and arterial oxygen saturation (SaO 2 ) 95%. In view of her unusual history and clinical features, a surfactant protein deficiency disorder was suspected. DNA from both siblings was extracted from whole blood and sent to the Department of Pediatrics Research Laboratory at Johns Hopkins University School of Medicine, which revealed compound heterozygosity for a known ABCA3 mutation (c.3609_11delCTT; F1203del) and a second, previously unknown, missense mutation in ABCA3 exon 5 (c.127 C>T; p.R43C) [4]. Genetic testing of the parents showed that her mother was heterozygous for the F1203del mutation and her father was heterozygous for the p.R43C mutation. Her serial pulmonary function tests demonstrated an obstructive pattern with forced expiratory volume in 1 sec (FEV 1 ) gradually decreasing from 78% predicted at age 10.5 years to 52% predicted at age 39 years, but forced vital capacity (FVC) remained within the normal range at 93% predicted at age 39 years. Diffusing capacity was severely reduced below 25% predicted from age 10.5 years and did not change over time (Fig. 2). Plethysmographic lung volumes did not reveal any lung restriction. A high-resolution chest computed tomography (HRCT) scan at age 28 years showed severe diffuse lung disease with an extensive distribution of cystic change throughout the lungs (Fig. 3A). In the lower lobes there was extensive vascular attenuation, and at the bases were multiple well-defined cysts up to 20 mm in size (Fig. 3B). Comparison to an HRCT chest three years prior showed no interval change. Transthoracic echocardiography at age 27 years showed an elevated pulmonary pressure at 55 mmHg with mild right ventricular and right atrial enlargement but normal contractility. Her brother had a similar obstructive pattern on serial lung function testing but had developed more airflow obstruction over time in conjunction with a higher residual volume and diffusing capacity (Fig. 2). His HRCT chest at age 30.5 years also demonstrated extensive thin-walled cysts measuring from a few millimetres to 60 mm. There were areas of reticulation and parenchymal distortion consistent with fibrosis, with air trapping on expiratory series (Fig. 3C, D). With genetic counselling and in view of the autosomal recessive nature of ABCA3 deficiency, the patient sought pregnancy at age 27 years, despite her cardiorespiratory disease. She was managed in a high-risk obstetrics unit from 30 weeks gestation, and given inhaled iloprost two weeks pre-partum and immediately post-partum. Elective lower segment caesarean section delivered a healthy male baby without any lung disease.
She was managed conservatively with supplemental oxygen after she developed resting hypoxaemia from age 27 years. No specific drug therapy for ABCA3 deficiency was trialled, and pulmonary hypertension medications could not be continued due to patient-perceived side effects. She developed right heart failure at age 39 years and was subsequently referred for consideration of a lung transplant. Her brother was also diagnosed with pulmonary hypertension and had been managed in a pulmonary hypertension clinic since age 28 years. He has remained on room air and is maintained on tadalafil and macitentan as well as a fluticasone, umeclidinium, and vilanterol inhaler.

(Fig. 2 legend: There was a trend towards increasing residual volume (RV) and functional residual capacity (FRC) in both siblings as airflow obstruction increased in adulthood. Diffusing capacity was severely reduced and remained unchanged.)

Discussion

ABCA3 mutations in adults with ILD are extremely rare, with a lung registry study identifying only three adult cases [6], and a recent literature review finding nine adult cases, with only one of the cases being diagnosed at birth and followed up through adulthood [5]. Our report demonstrates the interesting observation of ABCA3 mutations in siblings resulting in cystic lung disease, fibrosis, and airflow obstruction with a gradual decrease in FEV 1 over 30 years, but otherwise essentially stable pulmonary disease and lung function over 39 years into adolescence and adulthood. Her eventual declining clinical status was due to progression of pulmonary hypertension rather than her ILD. This observation is consistent with findings in children where mean lung function was low but tended to remain unchanged [7]. Although both siblings only tested positive for one known mutation (F1203del), the other abnormality consisted of a missense mutation in exon 5 (p.R43C, a substitution of cysteine for arginine) which was not recognized in ABCA3 deficiency at the time of testing of these siblings [4]. However, compound heterozygotes with mutations in the same codon (p.R43L and p.R43H) have been previously noted, and two other unrelated ABCA3 deficiency cases involving p.R43C missense mutations have been subsequently documented [4], making it likely that the p.R43C mutation was responsible for the clinical phenotype. Even with the same genetic mutations within the siblings, there is some clinical variability, with the sister having less airflow obstruction but more severe pulmonary hypertension, and the brother having greater airflow obstruction, more extensive cysts, and gas trapping. Although recessive frameshift or nonsense ABCA3 mutations are associated with respiratory failure and neonatal death [2,4], milder forms of ABCA3 deficiency exist which may be associated with survival beyond infancy [8]; these milder phenotypes may occur with missense, splice-site, and insertion/deletion ABCA3 mutations [4]. The presence of multiple parenchymal cysts may be a feature of this condition [6] and was previously noted in five out of nine children in a case series [7] and also in adults [5]. Lung histopathology may include a variety of presentations including pulmonary alveolar proteinosis, desquamative interstitial pneumonitis, and non-specific interstitial pneumonitis, but the characteristic feature of ABCA3 mutations is dense formation of lamellar bodies on electron microscopy [7].
Definitive diagnosis is via genetic testing for ABCA3 mutations, although a probable diagnosis of a genetic basis may be made on lung biopsies with immunohistochemistry and electron microscopy [1]. There are currently no guidelines for drug therapy to treat patients with ILD due to ABCA3 mutations, although prednisone, azathioprine, azithromycin, whole lung lavage, and prednisone with hydroxychloroquine have been trialled [5]. It is likely that lung transplantation is the only "curative" procedure available; however, transplant data are limited to infants and children, with only one adult case who underwent lung transplantation at age 21 years [5]. Outcomes from childhood transplantation for genetic disorders of surfactant metabolism including ABCA3 deficiency remain poor due to growth impairment, bronchiolitis obliterans, and other transplant-associated morbidities [9]. In conclusion, an ABCA3 mutation may be considered in an adult if there are findings of diffuse pulmonary cysts and pulmonary fibrosis with a neonatal or childhood history of respiratory distress. Airflow obstruction and pulmonary hypertension may progress in adulthood, contributing to hypoxaemia and right heart failure. If a rare surfactant protein disorder is suspected, genetic testing and advice from a specialized unit are recommended.

Disclosure Statement

Appropriate written informed consent was obtained for publication of this case report and accompanying images.
Evaluation of Different Fungi Toxicants against Yellow Rust Diseases on Bread Wheat (Triticum aestivum L.) in the Cold Arid Zone of Kargil, Ladakh (J&K), India

A field experiment was conducted to determine the efficacy of different triazole fungicides against yellow rust of wheat caused by Puccinia striiformis. The experimental field was divided into three parts; one part was sprayed with propiconazole. Five fungicides (propiconazole 10 EC, tebuconazole 25 EW, difenoconazole 25 EC, hexaconazole 5 EC, and azoxystrobin + difenoconazole) were used in the experiment. All the fungicide treatments significantly reduced the disease intensity, to 6.1-9.5%, as compared to the control (35.5%). The minimum disease intensity (6.1%) was recorded in the treatment where propiconazole was sprayed before disease appearance and azoxystrobin + difenoconazole after disease appearance, followed by propiconazole and tebuconazole with a disease intensity of 7.8%. A disease intensity of 8.2% was recorded in the treatment where propiconazole was again sprayed after disease appearance, followed by hexaconazole (9.3%). The maximum disease intensity was recorded where propiconazole and hexaconazole were sprayed.

Introduction

Bread wheat (Triticum aestivum L.) is one of the most widely grown and most consumed food crops all over the world. It is the second most important cereal crop after rice, and it contributes substantially to national food security by providing more than 50% of calories to the people. Wheat (Triticum aestivum L.) is a staple food of billions of people in the world, used to make flour for leavened, flat and steamed bread, cookies, cakes, pasta, noodles, beer and alcohol (Habib and Khan, 2003). Annually, wheat is produced on 224.53 million hectares of land and 672.2 million metric tons of wheat is produced in the world (FAO, 2013). According to this report, the world average wheat production is 2.99 tons/ha. In Kargil District of Ladakh (J&K), wheat is the second most important crop after barley. However, the production and productivity of wheat are curtailed by various biotic and abiotic factors. Among the biotic factors, yellow rust disease is the most threatening and one of the main wheat production bottlenecks. Many parts of the district, particularly the cooler areas, are now becoming hot spots for wheat rusts, where periodic epidemics cause significant yield losses. Therefore, the present study was undertaken to identify the best fungicide for controlling yellow rust in the cold arid zone of Kargil, Ladakh.

Materials and Methods

Bread wheat cultivar Krokar (local), highly susceptible to yellow rust (Puccinia striiformis f.sp.
tritici) disease, was planted at a yellow rust hot spot location, Stikchey (District Kargil), in plots of 2800 m2 during the years 2017 and 2018, and the experiment was laid out in a randomized block design (RBD) with three replications for each treatment. A seed rate of 240 kg/ha was sown in the 2nd week of April. The test fungicides propiconazole 25 EC (0.1%), tebuconazole 10 EC (0.1%), difenoconazole (0.05%), azoxystrobin + difenoconazole (0.05%), hexaconazole 10 EC (0.05%), mancozeb + carbendazim (0.25%), chlorothalonil (0.3%), and mancozeb (0.3%) were used. The plot was first divided equally (700 m2) into four parts; one part was sprayed with sterile water and the other three with propiconazole, chlorothalonil, and mancozeb, respectively, before the appearance of yellow rust. The second fungicide application was made at a 5% severity level of yellow rust (booting crop growth stage), and the remaining 300 m2 was kept as a check. Test and check fungicides were applied manually using a knapsack sprayer delivering 250 liters of water/ha. Rust severity was recorded in percentage using the modified Cobb scale (Peterson et al., 1948).

Results and Discussion

In the 2015-16 main cropping season, yellow rust disease pressure was very high and a strong disease epidemic developed, to the level of creating significant differences among all experimental plots. Fungicide spray treatments significantly reduced the disease over the control, and there were statistically significant differences between the test fungicides. Treatment T 11 (where propiconazole @ 0.1% and difenoconazole @ 0.05% were sprayed) significantly reduced the disease intensity and incidence, to 6.14% and 19.03%, respectively, followed by treatments T 12 and T 9 with 7.59 and 8.77, respectively. Treatments T 10 and T 13 were not significantly different. However, from visual field observation, all the test fungicides showed a comparable level of efficacy in controlling the disease as compared to the unsprayed plot. It is evident from Table 1 that treatments T 2 , T 4 , T 7 , T 14 and T 21 are at par with each other. Similarly, there is no statistical difference among T 3 , T 5 and T 24 ; T 18 and T 23 are also at par with each other. It is evident from Table 2 that T 17 , T 20 , T 26 and T 27 showed a comparable level of efficacy in the reduction of rust severity and incidence, whereas T 8 , T 15 , T 22 and T 28 showed disease intensities ranging from 29.17% to 33.16% compared to the plot where only sterile water was sprayed and the control, where the disease intensities were 39.17 and 41.09%, respectively. Alemu and Mideksa (2016) also reported that propiconazole significantly increased winter wheat yield by 77%. Ransom and McMullen (2008) showed that tebuconazole and propiconazole improved yield by 5.5 to 44.0%.
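For readers reproducing the efficacy figures, disease control relative to the untreated check can be computed directly from the modified Cobb scale severity scores; the following Python helper is our own illustration, not part of the original analysis.

def percent_disease_control(treatment_severity, control_severity):
    # Percent reduction in rust severity relative to the unsprayed control,
    # with both severities given as modified Cobb scale percentages.
    return 100.0 * (control_severity - treatment_severity) / control_severity

# Example: a plot at 6.14% severity against a 35.5% control
# corresponds to roughly 82.7% disease control.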
Testing consumer theory: evidence from a natural field experiment

We present evidence from a natural field experiment designed to shed light on whether individual behavior is consistent with a neoclassical model of utility maximization subject to budget constraints. We do this through the lens of a field experiment on charitable giving. We find that the behavior of at least 80% of individuals, on both the extensive and intensive margins, can be rationalized within a standard neoclassical choice model in which individuals have preferences, defined over own consumption and their contribution towards the charitable good, satisfying the axioms of revealed preference.

Introduction

Neoclassical theory provides a rich set of testable implications for how consumer demand responds to changes in relative prices and income. This paper presents evidence from the first large-scale natural field experiment shedding light on whether individual behavior is consistent with the predictions of revealed preference theory within a standard model of utility maximization subject to budget constraints (e.g., Afriat 1967). We do this through the lens of a natural field experiment on charitable giving. By focusing our analysis on the choice between a charitable good and private consumption, we vary the budget set individuals face in a straightforward and natural way, holding all other prices constant. We do so by offering various matching schemes that affect how donations given for the charitable good translate into donations received by the project. Specifically, we induce: (i) large changes in the relative price of the charitable good through the rates at which donations are matched; (ii) pure income transfers to individuals through a matching scheme that guarantees any positive donation is matched by some fixed amount; (iii) a non-convex budget set in which only donations above some threshold are matched. In our design, the induced budget sets intersect each other, opening up the possibility to directly test the predictions of revealed preference theory. For such research questions, a between-subject research design is strictly preferred to a within-subject design. This is because within-subject designs inevitably require the same individual to be presented with different budget sets at different moments in time. This raises the concern that there are natural changes over time in incomes, relative prices, asset holdings, or labor supplies that confound any inference that can be made on whether individual preferences satisfy the axioms of revealed preference. Our main result is that on both the extensive and intensive margins of charitable giving, individual choices can be rationalized within a standard model of consumers maximizing utility subject to budget constraints, where individual preferences are defined over own consumption and charitable donations received by the project. The behavior of at least 80% of recipients who make some positive contribution is in line with their preferences satisfying GARP. In short, in a real-world environment where participants make simple decisions they are familiar with, the predictions of microeconomic theory work well in explaining individual behavior.
We highlight that field experiments can be used to test revealed preference theory, and such approaches are complementary to non-experimental tests of consumer theory, which typically exploit panel data on consumer purchases. However, as in within-subject experimental designs, in non-experimental data apparent violations of revealed preference might instead be due to changes in tastes, changes in the holding of durables, or the storage of consumables, and consumption expenditures are typically measured with error. Consumer panels also typically suffer from observed price changes being both relatively small and not necessarily implying an intersection of budget sets. Hence, in contrast to our research design, tests of revealed preference based on non-experimental data are likely to have low power (Varian 1982; Bronars 1995). Such approaches have provided mixed results, with some studies rejecting behavior consistent with GARP (Mossin 1972; Hardle et al. 1991) and others finding more rationalizable patterns of consumption (Manser and McDonald 1988; Famulari 1995). Methodological advances using non-parametric techniques suggest that consumer behavior does not reject GARP in the long run for most income groups (Blundell et al. 2003). Our analysis also builds on laboratory evidence on consumer choice, which has provided mixed evidence on whether individual behavior is consistent with GARP (Battalio et al. 1973; Cox 1997; Sippel 1997; Andreoni and Miller 2002; Choi et al. 2007; List and Lucking-Reiley 2002). Our research design combines the key advantages of laboratory experiments in being able to experimentally manipulate the economic environment faced by agents with the advantages of a field study using real-world data on a large population. As suggested by Varian (2006), this research design is, perhaps, the best possible that could be used to test whether individual behavior is consistent with revealed preference theory. 1, 2

Footnote 1: Our results differ from some of the laboratory evidence on consumer choice, such as Battalio et al. (1973) and Sippel (1997), who find behavior not to be in line with GARP. This may be because, in our study, consumers are faced with a real-life setting and make simple decisions which they are familiar with, and we exploit a large sample of individuals.
Footnote 2: Our analysis here focuses on the broad question of whether individual behavior is consistent with neoclassical microeconomic theory. In companion papers, we exploit the natural field experiment to shed light on specific issues relating to the economics of charitable giving (Huck and Rasul 2011; Huck et al. 2015).

2 The natural field experiment

Design

In June 2006, the Bavarian State Opera organized a mail out of letters to over 25,000 individuals designed to elicit donations for a social youth project which the opera was engaged in. The project's beneficiaries are children from disadvantaged families whose parents are almost surely not among the recipients of the mail out. As it is not one large event that donations are sought for, but rather a series of several smaller events, it is clear to potential donors that additional money raised can fund additional activity. In other words, the marginal contribution will always make a difference to the project. Individuals were randomly assigned to one of five treatments that varied in how individual donations would be matched by an anonymous lead donor. The format and wording of the mail out is provided in the Appendix. The mail out letters were identical in all treatments with the exception of one paragraph. Since the presence of a lead donor may serve as a signal of project quality (Vesterlund 2003; Andreoni 2006), it is essential that the lead donor is also mentioned in a baseline treatment. Hence, in the control treatment T1, recipients were informed that the project had already garnered a lead gift of €60,000, but there was no offer to match donations. The wording of the key paragraph read as follows:

T1 (control): a generous donor who prefers not to be named has already been enlisted. He will support ''Stück für Stück'' with €60,000.
Unfortunately, this is not enough to fund the project completely, which is why I would be glad if you were to support the project with your donation.

T2 (50% matching): a generous donor who prefers not to be named has already been enlisted. He will support "Stück für Stück" with up to €60,000 by donating, for each Euro that we receive within the next 4 weeks, another 50 Euro cent. In light of this unique opportunity, I would be glad if you were to support the project with your donation.

T3 (100% matching): a generous donor who prefers not to be named has already been enlisted. He will support "Stück für Stück" with up to €60,000 by donating, for each donation that we receive within the next 4 weeks, the same amount himself. In light of this unique opportunity, I would be glad if you were to support the project with your donation.

T4 (non-convex): a generous donor who prefers not to be named has already been enlisted. He will support "Stück für Stück" with up to €60,000 by donating, for each donation above €50 that we receive within the next four weeks, the same amount himself. In light of this unique opportunity, I would be glad if you were to support the project with your donation.

T5 (income): a generous donor who prefers not to be named has already been enlisted. He will support "Stück für Stück" with up to €60,000 by donating, for each donation that we receive within the next 4 weeks, regardless of the donation amount, another €20. In light of this unique opportunity, I would be glad if you were to support the project with your donation.

Notice how T4 and T5 generate budget constraints that overlap and cross with others, thus generating revealed preference predictions.

[1] Our results differ from some of the laboratory evidence on consumer choice, such as Battalio et al. (1973) and Sippel (1997), who find behavior not to be in line with GARP. This may be because, in our study, consumers are faced with a real-life setting and make simple decisions which they are familiar with, and we exploit a large sample of individuals.

[2] Our analysis here focuses on the broad question of whether individual behavior is consistent with neoclassical microeconomic theory. In companion papers, we exploit the natural field experiment to shed light on specific issues relating to the economics of charitable giving (Huck and Rasul 2011; Huck et al. 2015).

2.2 Conceptual framework

We assume that potential donors have preferences defined over two dimensions: their own consumption, c, and the marginal benefit their donation provides, d_r. In our setting, we then have two goods: donations received by the project, and a composite good representing all other consumption. We denote the price and goods vectors as p and x, respectively. As in the exposition of Varian (2006), we then have the following definitions.

Definition (revealed preference). Given some vectors of prices and chosen bundles (p^t, x^t) for t = 1, ..., T, x^t is directly revealed preferred to x if p^t x^t ≥ p^t x. x^t is indirectly revealed preferred to x if there is some sequence r, s, ..., u, v such that x^t is directly revealed preferred to x^r, x^r is directly revealed preferred to x^s, ..., and x^v is directly revealed preferred to x.

Definition (generalized axiom of revealed preference). The data (p^t, x^t) satisfy the generalized axiom of revealed preference (GARP) if x^t being (directly or indirectly) revealed preferred to x^s implies that p^s x^s ≤ p^s x^t. In two dimensions, as in our setting, the Weak and Generalized Axioms of Revealed Preference are equivalent.
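The GARP test described by these definitions is mechanical enough to sketch in code. The following Python fragment is an illustrative implementation, not the authors' own code: it builds the direct revealed-preference relation, takes its transitive closure, and flags pairs that violate GARP. The usage numbers are hypothetical (income y = 200 is assumed).

```python
# Illustrative GARP check (not the authors' code): P and X hold T observed
# price vectors and chosen bundles over (c, d_r).
import numpy as np

def garp_violations(P, X):
    T = len(X)
    own = np.einsum("ti,ti->t", P, X)      # p_t . x_t
    cross = np.einsum("ti,si->ts", P, X)   # cross[t, s] = p_t . x_s
    # direct revealed preference: x_t R0 x_s iff p_t.x_t >= p_t.x_s
    R = own[:, None] >= cross
    for k in range(T):                     # Warshall transitive closure
        R = R | (R[:, k:k + 1] & R[k:k + 1, :])
    # GARP: x_t R x_s must imply p_s.x_s <= p_s.x_t
    return [(t, s) for t in range(T) for s in range(T)
            if R[t, s] and own[s] > cross[s, t]]

# bundles (c, d_r): control T1 (price of d_r is 1) and 100% match T3 (price 0.5)
P = np.array([[1.0, 1.0], [1.0, 0.5]])
X = np.array([[68.0, 132.0], [107.7, 184.6]])  # toy bundles assuming y = 200
print(garp_violations(P, X))                   # [] -> consistent with GARP
```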
The main result in the revealed preference literature is from Afriat (1967), which states that given some choice data (p^t, x^t) for t = 1, ..., T, the following conditions are equivalent: (i) the data satisfy GARP; (ii) there exists a non-satiated, continuous, monotone, and concave utility function u(x) that rationalizes the data. In our setting, this corresponds to individual behavior being rationalized by the following utility maximization problem:

max_{c, d_g} u(c, d_r)  subject to  c ≤ y − d_g,  c ≥ 0, d_g ≥ 0,  d_r = m(d_g),

where u(c, d_r) has the properties listed above, the first constraint ensures consumption can be no greater than income net of any donation given, y − d_g, the second constraint requires consumption and donations given to be non-negative, and the third constraint denotes the matching scheme m(·) that translates donations given into those received by the opera house.

Figure 1 graphs the budget sets induced by the five treatments in (y − d_g, d_r)-space. As the budget sets across treatments intersect, pairwise comparisons of the behavior of individuals in any two treatments allow us to test whether consumer behavior is, on average, consistent with GARP. However, although behavior, on average, might be consistent, each individual's preferences may violate GARP. We, therefore, exploit the random assignment of recipients to treatments to test for individual violations of GARP.

3 Descriptives

3.1 Treatment assignment, and extensive and intensive margin outcomes

Table 1 summarizes information on individuals in each treatment and reports the p values on the null hypothesis that the mean characteristic of individuals in the treatment group is the same as in the control group T1. There are no significant differences along any dimension between recipients in each treatment. Table 2 provides descriptive evidence on behavior on the intensive and extensive margins of charitable giving by treatment; RR denotes the response rate in each treatment. For each statistic, we report its mean, its standard error in parentheses, and whether it is significantly different from that in the control treatment. Figure 1 provides a graphical representation of the outcomes across treatments, showing for each treatment t the average bundle chosen, x^t, at the relevant price vector, p^t.

In our sample of 18,725 individual recipients, Columns 1-3 reveal that overall, 780 individuals donated a total of €75,350, corresponding to €116,489 raised for the project, with a mean donation given of €96.6. On the extensive margin of giving, Column 4 shows that response rates vary from 3.5 to 4.7% across treatments, which are almost double those in comparable large-scale natural field experiments on charitable giving (Eckel and Grossman 2008; Karlan and List 2007). Indeed, a rule of thumb used by charitable organizations is to expect response rates to mail solicitations of between .5 and 2.5% (De Oliveira et al. 2011). On the relative price of giving, we note that despite there being large variations in the budget sets in treatments T1-T3, there are no statistically significant differences in response rates across these treatments. On the intensive margin, Column 5 shows that in the control treatment T1, the average donation given is €132.

[Fig. 1: The Design of the Field Experiment and Outcomes by Treatment. The figure graphs the budget sets induced by the five treatments in (y − d_g, d_r)-space. The average bundle in each treatment is marked by a dot on its budget line, with the donation received marked on the horizontal axis and the donation given on the vertical axis.]
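As a side note, the matching schemes that generate these budget sets are easy to state programmatically. A minimal sketch, assuming only the treatment wordings above (the function and labels are ours, not from the paper):

```python
# Minimal sketch of the matching schemes as worded in the mail out letters:
# donations received d_r as a function of donations given d_g under T1-T5.
def donation_received(d_g: float, treatment: str) -> float:
    if d_g <= 0:
        return 0.0
    if treatment == "T1":   # control: no match
        return d_g
    if treatment == "T2":   # 50 cents per euro given
        return 1.5 * d_g
    if treatment == "T3":   # euro-for-euro match
        return 2.0 * d_g
    if treatment == "T4":   # non-convex: matched only at or above 50 euros
        return 2.0 * d_g if d_g >= 50 else d_g
    if treatment == "T5":   # lump sum: any positive gift matched by 20 euros
        return d_g + 20.0
    raise ValueError(f"unknown treatment {treatment!r}")
```

With these mappings, the T2 and T5 budget lines indeed cross at d_g = €40, where both yield d_r = €60.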
As the relative price of donations received falls in treatments T2 and T3, the average donation received increases to €151 in T2 with a 50% match rate, and to €185 in T3 with a 100% match rate. As shown in Fig. 1 and Column 7 of Table 2, as the match rate increases, the average donation given, d_g, falls from €132 in the control treatment T1 to €101 in T2 with a 50% match rate, and to €92.3 in T3 with a 100% match rate.

Treatment T4 induces recipients to face a non-convex budget set. For donations below €50, the budget line is coincident with that of the control treatment T1; for donations at or above €50, it coincides with that of the 100% matching treatment T3. Figure 1 shows that average outcomes in terms of donations given and received in T4 replicate almost exactly those in the 100% matching treatment T3: the average donation received in T4 is €194, as opposed to €185 in T3, and the average donation given is €97.9, as opposed to €92.3 in T3. To see why this is so, note that in the control treatment, the average donation received is €132. This suggests the portion of the budget line in T4 that lies to the left of €100 on the x-axis of donations received is irrelevant for many recipients. In essence, treatments T3 and T4 present the average recipient with an almost identical choice. Hence, response rates and donations should not differ markedly between the two.

Treatment T5, which causes a parallel shift out of the budget set conditional on any positive donation, should induce the largest change in the number of donors relative to the control group, because any individual with preferences such that MRS_{c,d_r}|_{d_r=0} < 0 will find it optimal to donate some amount in T5, whereas this is not the case in the other treatments. The response rate is, indeed, significantly higher in T5 relative to the other treatments. However, it is still only 4.7%, highlighting that even among this targeted population, 95% of individuals do not care for the project. Comparing the income treatment T5 to the control treatment, consumer theory suggests that these additional donors should be willing to contribute relatively small amounts to the project, which is strongly supported in the data.

4 Testing revealed preference theory

4.1 Aggregate violations

As the budget sets in treatments T1 to T5 intersect or overlap, as shown in Fig. 1, pairwise comparisons of the average behavior of individuals in any two treatments lead to tests of whether behavior is consistent with revealed preference theory. These tests are of three types: (i) the proportion of recipients that should donate some positive amount; (ii) the proportion of recipients that lie above or below some critical threshold, which is typically where the two budget lines intersect; and (iii) the distribution of donations given and received.

An example of the first type of test is given by comparing treatments T1 and T3. As shown in Fig. 1, the budget set expands moving from T1 to T3. Assuming that individual preferences are well behaved, the proportion of individuals that find it optimal to provide some positive donation under T3 should be at least as great as the proportion that respond under T1. An example of the second type of test is given by comparing treatments T2 and T5, in which the budget sets cross at donations given equal to €40. For all donations given greater than €40, the budget set expands under T2 relative to T5.
Hence, revealed preference arguments imply the proportion of donations given that are at least €40 should be weakly higher in T2 than in T5. An example of the third type of test is given by comparing treatments T3 and T4. As shown in Fig. 1, the budget sets are coincident for donations given that are more than €50. Hence, the distribution of donations given, conditional on them being more than €50, should be identical in both treatments. This follows from the fact that any donors that contribute strictly more than €50 under T3 should, by revealed preference, also contribute the same under T4.

Table 3 presents the results for each pairwise treatment comparison. Columns (1)-(3) give the hypotheses to be tested, of the type: "the behavior is consistent with revealed preferences." One test is boxed as it requires the additional assumption of strict convexity in addition to satisfying GARP. For each test, we report the p value on the null hypothesis consistent with revealed preference theory. [Table 3 notes: Hypotheses being tested in Columns (1)-(3); they describe behavior that is, on average, consistent with revealed preferences. P value on the relevant test in brackets below. The test in Column (3) requires the assumption of convexity on consumer preferences. The tests of proportions are based on all mail out recipients.]

Thirteen of the fourteen tests do not reject the hypothesis that consumers, on average, have an underlying utility function that displays standard properties. The exception is the test between T3 and T4 in the last column that is based on the assumption of convexity. To examine this violation in more detail, we note that if preferences are convex, then by revealed preference, individuals who would have donated less than €50 in T3 are expected to donate no more than €50 in T4. Hence, relative to T3, there ought to be relatively more donations given below or at d_g = €50 in T4. In the data there is, however, a bunching of donations in T4 relative to T3 slightly above d_g = €50, and a fall in the proportion of donations given below €50. That is, we find that donors prefer to give incrementally above €50 when faced with the non-convex budget set (perhaps to avoid the appearance of being "cheap").

4.2 Individual violations

In our between-subject design, we do not observe the same consumer making multiple choices under alternative budget sets. To detect individual violations of GARP, we propose a novel approach based on estimating, for each individual i whose actual choice we only observe in treatment t, what she would have donated in the relevant counterfactual treatment t′ ≠ t, based on the predictions from a hurdle model. This takes explicit account of the fact that the initial decision to donate (D_i = 0 or 1) may be separated from the decision of how much to donate: the choice of d_r conditional on D_i = 1.

A simple two-tiered model for charitable giving has, as a first stage, a probit model of giving. At the second stage, we assume that donations received from individual i are log-normally distributed conditional on d_ri > 0. The maximum-likelihood estimator of the second-stage parameters is then simply the OLS estimator from the following regression:

ln(d_ri) = α + β_t T_i + γ′X_i + u_i,  (2)

where T_i is a dummy for the treatment that individual i was assigned to (T2-T5) and X_i is a vector of individual characteristics. We estimate the coefficients relative to a control treatment for each treatment separately.
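A hedged sketch of this two-tier estimation in Python, using statsmodels: the DataFrame `df`, its column names, and the helper structure are assumptions for illustration, not the authors' actual code or data.

```python
# Hypothetical sketch of the two-tier hurdle model; `df` and all column
# names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

controls = ["female", "n_orders", "avg_ticket_price", "munich", "last_buy_2006"]

def fit_hurdle(df: pd.DataFrame, treatment_dummy: str):
    X = sm.add_constant(df[[treatment_dummy] + controls])
    # Tier 1: probit for the decision to donate at all (D_i = 0 or 1).
    donated = (df["donation_received"] > 0).astype(float)
    probit = sm.Probit(donated, X).fit(disp=0)
    # Tier 2: OLS on log donations received, conditional on d_ri > 0
    # (the MLE under the conditional log-normality assumption).
    pos = df["donation_received"] > 0
    ols = sm.OLS(np.log(df.loc[pos, "donation_received"]),
                 X.loc[pos]).fit(cov_type="HC1")  # robust standard errors
    return probit, ols
```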
We also control for the following individual characteristics X_i, to reduce the sampling errors of the treatment effect estimates: whether recipient i is female, the number of ticket orders placed in the 12 months prior to the mail out, the average price of these tickets, whether i resides in Munich, and a dummy for whether the year of the last ticket purchase was 2006. We calculate robust standard errors. More details of the procedure are provided in the Technical Appendix.

In a second step, for each individual and each treatment that this individual was not in, we predict her donation amount based on her individual characteristics, her counterfactual treatment assignment, and the coefficient estimates from the first stage. We use this comparison between one actual treatment t and one predicted counterfactual treatment t′ as the basis of tests for individual violations of revealed preference theory. There are 10 such pairwise comparisons, as shown in Table 4. These are analogous to a subset of the tests performed in Table 3, namely those for which the budget sets intersect.

Column 1 shows the number of violations of revealed preference theory for each pairwise comparison of treatments. We also show the proportion of violations, defined as the number of violations divided by the number of positive actual donations that fulfill the first part of the condition. Both measures have been previously used in the literature as measures of goodness of fit in tests of revealed preference (Gross 1995). Across pairwise comparisons, the proportion of violations varies. To provide a sense of the magnitude of such violations, Column 2 shows the average donation given among violators of GARP and a 95% confidence interval. The first row shows that individuals that violate GARP and donate less than €50 in T4, on average, actually donate €49.5. Hence, there are a small number of violations of this prediction of revealed preference theory, and the magnitude of the violations is small. In contrast, the fifth row shows that individuals that violate GARP and donate more than €40 in T5, on average, actually donate €68. Hence, for this test, there are both a relatively large number of violations and those violations are quantitatively large.

For comparisons involving the income treatment T5, Column 3 restricts the sample to high-valuation recipients who, based on their predicted donation from (2), would likely donate more than €20 even absent any match, to avoid confounding the comparisons with a change in the identity of the marginal donor. For these donors, the treatment corresponds to a de facto increase in income rather than a conditional increase in income, as they would have donated some positive amount in any case. When focusing on high-valuation donors, the number of violations falls considerably. This highlights that some of the earlier violations are likely driven by changes in the composition of donors across treatments. In particular, there are likely to be low-valuation donors that give positive amounts in the income treatment T5 but that would not have donated in any other counterfactual treatment.

[Table 4 notes: The number of violations is based on recipients that responded with some positive donation in their assigned treatment. The percentage of violations is the number of violations divided by the number of individuals that fulfill the first part of the condition (N given in square brackets). In Columns 1 and 4, the proportion of violations is the number of violations divided by the total number of positive donations given in the treatment from which actual (and not predicted) donations are used. Column 2 shows the predicted donation in each pairwise comparison among those individuals that violate the predictions of revealed preference theory. The pairs in Column 3 are restricted to those that are predicted to give higher than average amounts (absent any match). In Column 4, we form predicted donations by regressing the log of donations received on observable characteristics of the recipient but not the treatment dummy.]
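To illustrate the counterfactual test, the sketch below, again with hypothetical column names and building on the fit above, predicts T2 donations for observed T5 donors and counts violations of the T2-versus-T5 prediction at the €40 crossing; the paper's actual procedure is documented in its Technical Appendix.

```python
# Continuing the sketch above (hypothetical names throughout): predict each
# T5 donor's counterfactual T2 donation and count revealed-preference
# violations at the 40-euro crossing of the T2 and T5 budget lines.
def count_t2_t5_violations(df, ols_t2, threshold=40.0):
    donors = df[(df["treatment"] == "T5") & (df["donation_given"] > 0)].copy()
    donors["treat_T2"] = 1.0                       # counterfactual assignment
    X_cf = sm.add_constant(donors[["treat_T2"] + controls], has_constant="add")
    pred_received = np.exp(ols_t2.predict(X_cf))   # predicted d_r under T2
    pred_given = pred_received / 1.5               # under T2, d_r = 1.5 * d_g
    # A donor giving >= 40 in T5 should, by revealed preference, also
    # give >= 40 under the (there) expanded T2 budget set.
    high_in_t5 = donors["donation_given"] >= threshold
    return int((high_in_t5 & (pred_given < threshold)).sum())
```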
To summarize, the behavior of 88 individuals (out of 466) is predicted to violate revealed preferences, while at least 80% of recipients' behavior is consistent with GARP. Whether this is a large or small number depends on the power of our tests, which, in turn, requires a specific alternative hypothesis to be specified (Varian 1982; Bronars 1995). On the one hand, in contrast to non-experimental methods, our field experiment allows us to engineer large changes in relative prices holding everything else equal. This improves the power of our test. On the other hand, the bundle at which the budget sets intersect in any two treatments in our design is distant from the bundle chosen on average in the treatments, thus lowering the power of our test. The extent to which these factors offset one another varies across each of the pairwise comparisons in Table 4.

To provide a sense of which of the pairwise comparisons are most informative, we consider the following alternative hypothesis. We generate predicted choices for each donor by first estimating a specification analogous to (2) but excluding the treatment dummy. Column 4 of Table 4 then shows the number and percentage of violations of GARP that would have occurred under this alternative hypothesis. For eight out of the ten pairwise comparisons, the number of actual violations is equal to or smaller than the number of violations based on this alternative, in some cases by orders of magnitude, suggesting that these pairwise comparisons are powerful tests of GARP. More details of this test are provided in the Technical Appendix.

5 Conclusions

We have presented evidence from the first large-scale natural field experiment designed to shed light on whether consumer behavior is consistent with the predictions of revealed preference theory. We do so in the context of a field experiment on charitable giving, which allows us to vary budget sets experimentally in a straightforward and very natural manner. We find that consumer behavior, on both the extensive and intensive margins of charitable giving, can be rationalized within a standard model of consumer choice in which individuals have preferences over their own consumption and their contribution towards the charitable project. The behavior of at least 80% of recipients is in line with them adhering to GARP. In short, in a real-world static environment where participants make simple decisions they are familiar with, the predictions of microeconomic theory work well in explaining the observed choices of individuals.
2018-12-16T07:06:24.594Z
2017-10-13T00:00:00.000
{ "year": 2017, "sha1": "023ccfd74c07b34b6596c35e4ad61e1e0b604931", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40881-017-0040-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "85a8442103a3a48a61a1312d55b25f457ff4be3d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics", "Medicine" ] }
119197838
pes2o/s2orc
v3-fos-license
CP-violation in Compton scattering

I consider Compton scattering off the nucleon in the presence of CP violation. I construct the Compton tensor which possesses these features and consider the low energy expansion (LEX) of the corresponding amplitudes. It allows one to separate out the Born contribution, which only depends on the static properties of the nucleon, such as the electric charge, the mass, the magnetic moment, and the electric dipole moment (EDM). I introduce new structure constants, the T-odd nucleon polarizabilities, which parametrize the unknown non-Born part. These constants describe the response of the T-violating content of the nucleon to an external quasistatic electromagnetic field. As an estimate, I provide a HBChPT calculation for these new polarizabilities and discuss the implications for the experiment.

I. INTRODUCTION

The first proposal of an experimental search for CP violation effects in atoms was made almost 40 years ago [1]. The modern advanced experimental techniques realized in the experiments on the electron's electric dipole moment (EDM) are based on that idea and have a sensitivity of d_e ∼ 10^{-26} e·cm [2]. Apart from the electron EDM, experimental searches for the EDM of the neutron are ongoing [3]. Current sensitivity allows for detection of an electric dipole moment (EDM) of the neutron at the level of d_n ∼ 10^{-26} e·cm. From the theoretical point of view, a non-zero EDM could imply a non-zero value of the QCD θ-term, as the latter can induce an EDM [4].

An attractive idea to enhance the experimental sensitivity to the electron EDM was proposed in [5]. In an atom, the electrons are moving in the electromagnetic field of the nucleus. If the electron possesses an EDM, the electric Coulomb field of the nucleus would induce a magnetic dipole moment, proportional to the electron EDM and the electromagnetic field strength, leading to the magnetization of the sample. Since the electromagnetic field strength inside the nucleus can be several orders of magnitude larger than those achievable in the experiment, the effect of a non-zero electron EDM is expected to be magnified. Due to the chaotic relative orientation of atoms in a sample, these elementary magnetisations sum up to zero; therefore, one would need a polarizing electric field to be applied to the sample in order to observe the sample magnetization. At the same time, the authors of [5] indicate that the effect of these atomic T-odd polarizabilities might interfere with the effect of the nuclear EDM and has to be taken into account.

If a non-zero neutron EDM is discovered in the near future, further tests of our understanding of QCD and fundamental interactions will bring us to the study of the microscopic structure of the CP-violating content of the nucleon, which can be described in terms of T-odd polarizabilities of the nucleon. In this paper I address the following questions. Can the presence of these CP-violating structure constants lead to a substantial interference with the measurement of the EDM? And how, if at all, can they be measured?

The concept of the polarizabilities was first introduced in classical electrodynamics and characterizes the ability of the elementary charges within a given system to be displaced from their positions in the presence of an external electric field.
The electric dipole moment resulting from such a displacement is proportional to the strength of the applied field, and the coefficient of proportionality is called the (electric dipole) polarizability. This constant quantitatively describes the forces that hold the system together. For instance, the atomic or molecular electric dipole polarizability is known to be of the order of the atom's or molecule's volume [6]. Instead, the nucleon electric dipole polarizability α_N ∼ 10^{-3} fm^3 is only about 1/1000 of its volume, V_N ∼ 1 fm^3, which indicates that the strong forces that hold the proton together are considerably stronger than the electromagnetic forces holding the electron in the atom.

The T-violating polarizabilities of the nucleon result from two pieces: the short-range T-violating physics (T-violation is generated well above the electroweak scale by unknown new physics) responsible, for instance, for the QCD θ-term, and the (mostly) long-range pion physics, which is quite analogous to the usual Compton scattering case. Thus, the natural size of the nucleonic T-violating polarizabilities is set by g_0 ≲ 10^{-11}, the strength of the θ-term-induced coupling, with δ_T denoting a T-violating nucleon polarizability. One can compare this to the estimates for the atomic CP-violating polarizability β_CP [5], where atomic units were adopted and d_n is the EDM of the neutron.

II. COMPTON SCATTERING AT LOW PHOTON ENERGIES

Real Compton scattering can be described, under the assumption of invariance under parity, charge conjugation and time reversal, by means of 6 structure-dependent amplitudes A_i(ω, θ), i = 1...6, with ω = ω′ the c.m. energy of the initial and outgoing photons and θ the c.m. scattering angle. Here ε, k̂ (ε′, k̂′) stand for the polarization vector and direction of the initial (final) photon, and σ is the spin polarization of the nucleon. Following Ref. [7], the functions A_i(ω, θ) can be expanded into a series in powers of the (small) photon energy ω; the explicit low-energy expansions are given there. For each Compton amplitude, the leading terms in the ω expansion are given by model-independent Born contributions which are completely defined by the static properties of the nucleon as a spin-1/2 particle with the electric charge e_N, anomalous magnetic moment κ_N (magnetic moment µ_N = e_N + κ_N) and mass M_N [7]. The higher-order terms describe internal structure-dependent effects and are parametrized in terms of 6 polarizabilities. Two of them, α_E and β_M, are spin-independent electric and magnetic polarizabilities, which enter the amplitude at O(ω^2) and measure the deformation of the charge and magnetization distributions in the presence of quasistatic electric E and magnetizing H external fields, d = α_E E and m = β_M H, with d (m) denoting the induced electric (magnetic) dipole moment. The other four polarizabilities γ_i, i = 1...4, describe the response of the spin-dependent distributions inside the nucleon to the quasistatic external field. For example, the polarizabilities γ_{1,3} quantify the induced spin-dependent electric dipole moment in the external magnetic field. Similarly, the polarizabilities γ_{2,4} quantify the magnetic dipole moment induced in the external electric field.

III. COMPTON SCATTERING WITH P AND CP VIOLATION

Quite in the spirit of the previous section, we construct the Compton amplitude which violates both parity and time reversal (and thus CP). In the corresponding decomposition, Eq. (8), I use notation as close to the CP-even Compton scattering of Eq. (4) as possible.
All the structures we are interested in should contain at most one spin vector, since one can always reduce the number of σ's in a product through σ_a σ_b = δ_{ab} + i ε_{abc} σ_c, and the minimal power of photon momenta. I do not change the dependence of the structures of Eq. (4) on the photon polarization vectors, if possible, but modify the dependence on the photons' momenta and the nucleon spin such that the result is CP-odd. The first two structures in Eq. (8) are obtained by multiplying the corresponding C-, P- and T-conserving structures of Eq. (4) by the explicitly P- and T-violating scalar σ·(k̂ − k̂′). The third structure results from substituting the P-even, T-odd spin vector σ by the P-odd, T-even vector (k̂ − k̂′). The fourth structure in Eq. (4) does not allow for a modification to obtain a CP-odd structure distinct from that coming with A^{T/}_1; therefore I use another structure instead. The last two structures are obtained from the corresponding Compton structures by changing the relative sign between the two terms. In this way, these combinations become T-odd but remain P-even. For completeness, I also give the amplitudes which are P-odd and T-even. Together with the C-, P- and T-even Compton amplitudes of Eq. (4) and the T-odd amplitudes of Eq. (8), these amplitudes form the complete basis for real Compton scattering with 2^4 = 16 possible polarization states.

IV. LOW ENERGY EXPANSION AND CP-ODD POLARIZABILITIES OF THE NUCLEON

The next step is to calculate the Born part of the T-violating Compton scattering amplitude. This Born contribution corresponds to an elastic absorption (emission) of the initial (final) photon by either the initial or final nucleon, with a single nucleon propagating in the intermediate state. For such an amplitude to violate time reversal, one needs a T-violating photon coupling to the nucleon. This can be arranged by including a nonzero EDM term (proportional to γ_5 σ^{µν} q_ν) into the electromagnetic vertex of the nucleon, where the dimensionless d̃_N is the electric dipole moment (EDM) of the nucleon measured in units of the nuclear magneton e/(2M_N), and the index N = p, n indicates whether the nucleon is the proton or the neutron, respectively. In the following, I will use the EDM in the usual units. The direct calculation of the diagrams in Fig. 1 leads to results in which four new constants are introduced: the T-odd, P-odd polarizabilities of the nucleon δ^T_i, i = 1...4. As in the case of P-even, T-even Compton scattering, the spin-independent polarizability parametrizes the term which is quadratic in the photon energy, while the spin-dependent ones come at order ω^3. One should note that the fact that the amplitudes do not possess definite crossing symmetry (ω → −ω) is due to the use of the c.m. frame, which makes the direct and crossed channels asymmetric by imposing that the nucleon in the intermediate state is at rest in the s-channel, p + k = 0, while this is not the case for the u-channel, p − k′ ≠ 0.
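As an aside, the spin-algebra identity used above to reduce products of Pauli matrices, σ_a σ_b = δ_{ab} + i ε_{abc} σ_c, can be verified numerically; a minimal self-contained check:

```python
# Minimal numerical check of sigma_a sigma_b = delta_ab I + i eps_abc sigma_c,
# the identity used above to reduce the spin structures.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0  # totally antisymmetric tensor

for a in range(3):
    for b in range(3):
        lhs = sigma[a] @ sigma[b]
        rhs = (a == b) * np.eye(2) + 1j * sum(eps[a, b, c] * sigma[c]
                                              for c in range(3))
        assert np.allclose(lhs, rhs)
print("identity verified")
```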
The polarizabilities introduced above can be interpreted as the deformation of the system under the influence of the external electromagnetic field, as was done for the usual Compton scattering. The polarizability δ^T_1 quantifies the electric dipole moment induced by the gradient of the external electric field in the direction of the spin, and similarly for the polarizability δ^T_2. Furthermore, there is one polarizability which characterizes the electric dipole moment induced by the external magnetic field (without spin), and finally, another polarizability which quantifies the electric dipole moment induced by the gradient of the projection of the external electric field onto the nucleon spin.

V. HBCHPT CALCULATION OF THE T-ODD POLARIZABILITIES

The QCD θ-term leads to a P-odd, T-odd coupling of the pion to the nucleon. The contribution to the polarizability δ^T_3 arises due to neutral pion exchange in the t-channel, as shown in Fig. 2, for which one needs the anomalous π^0γγ vertex provided by the Wess-Zumino-Witten Lagrangian, with F_π the pion decay constant. Furthermore, we will need the usual CP-conserving pion-nucleon Lagrangian. The lowest-order ChPT Lagrangian in the heavy baryon formalism is well known, and we refer the reader to Ref. [8] for the details. The representative diagrams at one loop are shown in Fig. 3. Keeping the leading terms in ω/M_N only, we obtain the results for the T-odd polarizabilities, in which g_A stands for the axial coupling of the nucleon. We note that the polarizability δ^T_4 arises due to nucleon recoil effects and is thus of order ω/M_N.

VI. EXPERIMENTAL ACCESS TO THE T-ODD POLARIZABILITIES

In principle, one might try to measure these new structure constants of the nucleon using the well-established experimental techniques used in EDM-type experiments. In such experiments, one measures the difference in the precession frequency of the spin in the external magnetic and electric fields depending on the field orientation. However, these experiments use static fields. Under these conditions, the Compton effect is undetectable, as the corresponding Compton frequency shift is ∼ ω/M_N. As one can see from Eqs. (11) and (17), the polarizability contributions arise as corrections in powers of ω/m_π to the leading Born contributions. To see these corrections, one has to go to photon energies (frequencies) comparable to the pion mass. Such experimental conditions may be realized in a Compton scattering experiment.

One of the possibilities would be to scatter circularly polarized photons off an unpolarized nucleon target. One can flip the polarization of the incoming photon and, without detecting the polarization state of the photon in the final state, measure the difference in the signal. The general expression of such a single-spin asymmetry in terms of the amplitudes defined above is somewhat lengthy. Making use of the LEX of the PCTC Compton amplitude for the proton, one obtains an expression in which the LEX of the amplitudes A_1 and A^{T/}_3 are given in Eqs. (4) and (11), while the LEX of the P-odd amplitude A^{P/}_3 can be found in Ref. [9]. One notices that this asymmetry obtains contributions both from PVTC and PVTV amplitudes. The contribution of the former originates mainly from the P-odd combination of Compton helicity amplitudes, while the main source of the latter is the combination of the backward-dominant helicity amplitudes |T_{−1,−1/2;1,1/2}|^2 − |T_{1,1/2;−1,−1/2}|^2, which is non-zero in the presence of both P- and T-violation. The parity-violating asymmetry was first considered in [9], and it is expected to be of order 10^{-8} at forward angles.
If CP-violation is due to a non-zero QCD θ-term, which is very tightly constrained by the experimental limit on the EDM, the expected value of the single-spin asymmetry in Eq. (18) is of order 10^{-11} for ∼ 100 MeV photons and very backward angles. The effect is quadratic in the photon energy ω. Although this effect is tiny as compared to the parity-violating contribution, by going to very backward angles the latter is highly suppressed due to the cos^4(θ/2) factor in front. It furthermore has a cubic dependence on the photon energy [9].

Another important background is represented by the analyzing power of Compton scattering. Experimentally, it is impossible to achieve 100% circular polarization. The remaining linear polarization component can lead to a similar T-odd observable that does not require T-violation but arises from the final state interaction. One has for the Compton cross section [12]

dσ/dΩ = (dσ/dΩ)_0 [1 + λ P cos 2φ],

with σ_0 the usual unpolarized differential cross section and φ the angle between the linear photon polarization direction and the reaction plane. Finally, λ represents the degree of linear polarization, and P the analyzing power. The analyzing power arises as an interference between the purely real leading-order Compton amplitude (below the pion threshold) and the imaginary part of the next-to-leading order Compton amplitude, as shown in Fig. 4. Inserting the leading LEX (Thomson) term in place of the blobs in the figure, we obtain the imaginary part, where the energy-dependent factor is due to the phase space. The analyzing power P is then obtained from such interference terms. This gives an adequate result for the leading contribution at low energies, and the corrections due to imaginary parts of other amplitudes arise at order ω^3. Assuming the degree of linear polarization in the photon beam to be of order 1%, and going to very backward angles, so that cos(θ/2) ≤ 0.1, we see that such QED final-state interactions can lead to asymmetries of order 10^{-7}.
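A back-of-envelope comparison of the three scales discussed in this section, using only the order-of-magnitude prefactors and the energy/angle dependences quoted in the text (the normalizations are rough estimates, not precise predictions):

```python
# Back-of-envelope comparison of the asymmetry scales quoted in the text;
# the prefactors are order-of-magnitude values, not precise predictions.
import math

M_PI = 135.0  # neutral pion mass, MeV (approximate)

def a_cp(omega):
    """CP-odd signal, ~1e-11 * (omega/m_pi)^2, peaked at backward angles."""
    return 1e-11 * (omega / M_PI) ** 2

def a_pv(omega, theta):
    """P-odd background, ~1e-8 * (omega/m_pi)^3, suppressed as cos^4(theta/2)."""
    return 1e-8 * (omega / M_PI) ** 3 * math.cos(theta / 2) ** 4

A_FSI = 1e-7  # analyzing-power background for ~1% residual linear polarization

omega, theta = 100.0, math.radians(170.0)  # just below pion threshold, backward
print(f"CP-odd ~ {a_cp(omega):.1e}, P-odd ~ {a_pv(omega, theta):.1e}, "
      f"FSI ~ {A_FSI:.0e}")
```

At ω ≈ 100 MeV and θ ≈ 170°, this reproduces the hierarchy described above: the FSI analyzing power dominates, while the cos^4(θ/2) suppression pushes the P-odd background below the CP-odd signal.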
This operator generates the non-Born part of the amplitude A T / 3 only, since the EDM is supposed not to obtain contribution from this mechanism, where the Wilson coefficient should be taken at low energy. The EDM then arises at one-loop level as a QED radiative correction, as shown in Fig. 5 δ such that we obtain a limit on this New Physics contributions from the EDM, The loop calculation contains a quadratic divergence due to the neglection of the vertex structure within an effective field theory treatment. To obtain an order-ofmagnitude estimate of this loop contribution, I use the "naive dimensional analysis" method [11] that is based on the dimensional regularization approach that ensures that the only mass scales arising in such a calculation are physical particle masses. The quadratic divergence itself should cancel exactly, once one specifies the underlying theory that is renormalizable. Unless there exists a symmetry that enforces the exact cancellation, at low photon energies this cancellation should not occur at 100% level. Then, the naive dimensional analysis estimate is still adequate. Combining this limit with Eqs. (25) and (18) we arrive to the following upper limit for the CP -violating asymmetry generated by the New Physics: To arrive to this result I neglected the running of the Wilson coefficient c(Λ). This running may indeed be substantial, but one would rather expect a logarithmical, and not quadratic running, therefore the 1/Λ 2 will be dominant for the dependence of the above limit on the New Physics scale. An accurate treatment can indeed change the estimate somewhat, and I leave this investigation for a future work, as this calculation goes beyond the scope of the present article. Since the pion is the lightest hadronic state, this result means that the calculation of the CP -odd polarizabilities of the nucleon in the ChPT with pions of Eq. (17) represents the dominant contribution due to the 1/Λ 2 suppression of heavier particles contributions. In other words, if a substantial CP -violating mechanism in nucleon spinindependent Compton scattering exists, it should involve some light particles, like pions or eta's for instance, to be observable at the same level as the CP -violating πN N coupling contribution considered here in greater detail. VIII. CONCLUSIONS In summary, I considered Compton scattering in the presence of CP -violation. I defined the Compton amplitudes that possess this feature and applied the Low Energy Theorem to this amplitude. After the separation of the Born contribution that is defined by static properties of the nucleon and its EDM, I parametrized the unknown non-Born part by introducing the modelindependent structure constants, the nucleon T -odd polarizabilities. I calculated these polarizabilities in the assumption that the CP -violation is due to non-zero QCD θ-term that generates a CP -violating πN N coupling. I furthermore proposed an observable in Compton scattering that is directly sensitive to CP -violation that is a single-spin asymmetry with circularly polarized photon beam. Within the model for the T -odd polarizabilities used in this work, I estimated such an asymmetry to be of order 10 −11 × ω 2 m 2 π , ω being the photon energy, and found that it is picked at backward angles. The above limit originates from the experimentaql limits on EDM translated into the strength of the QCD θ-term. 
Experimentally, the CP-odd asymmetry is always accompanied by the P-violating contribution to Compton scattering, which was estimated in the literature as 10^{-8} × ω^3/m_π^3 and is suppressed at backward angles as cos^4(θ/2). I also considered a background process that involves the linear polarization component in the photon beam, which can lead to an azimuthal-angle-dependent asymmetry due to final-state interactions that generate an imaginary part of the Compton amplitude. At very backward angles, such FSI can lead to asymmetries of order 10^{-7} × cos 2φ, with φ the angle between the direction of the linear polarization and the scattering plane.
2008-03-03T22:44:20.000Z
2008-03-03T00:00:00.000
{ "year": 2008, "sha1": "b8b3b9e961ea0554333b8d898786f2c15aa35b12", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0803.0343", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b8b3b9e961ea0554333b8d898786f2c15aa35b12", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118351686
pes2o/s2orc
v3-fos-license
Estimate of the fraction of primary photons in the cosmic-ray flux at energies ~10^17 eV from the EAS-MSU experiment data

We reanalyze archival EAS-MSU data in order to search for events with anomalously low content of muons with energies E_µ > 10 GeV in extensive air showers with number of particles N_e > 2×10^7. We confirm the first evidence for a nonzero flux of primary cosmic gamma rays at energies E ~ 10^17 eV. The estimated fraction of primary gamma rays in the flux of cosmic particles with energies E > 5.4×10^16 eV is (0.43 +0.12 −0.11)%, which corresponds to the intensity of (1.2 +0.4 −0.3)×10^{-16} cm^{-2} s^{-1} sr^{-1}. The study of arrival directions does not favour any particular mechanism of the origin of the photon-like events.

1 Introduction

The study of the primary mass composition of ultra-high-energy (UHE) cosmic rays (CR) is one of the topical problems of astroparticle physics, because these experimental results are of crucial importance for understanding the theory of both cosmic-ray generation in the sources and the subsequent propagation to the Earth. The low UHECR intensity makes their study by direct methods impossible, so that the only available method is the study of extensive air showers (EAS). The dominant part of EAS is caused by primary nuclei (from protons to iron); however, there is considerable interest in the possible presence of very different particles, e.g. UHE gamma rays, among them. First works on the subject appeared already half a century ago (see e.g. Ref. [1]), but definitive quantitative results are still missing (cf. a review [2] and references therein). Indeed, the highest-energy cosmic photons firmly detected had the energy of ∼ 50 TeV [3]. The searches for gamma rays in the energy ranges 3×10^14 eV ≲ E ≲ 5×10^16 eV (the EAS-TOP [4], CASA-MIA [5] and KASCADE [6] experiments) as well as at E ≳ 10^18 eV (the Haverah Park [7], AGASA [8, 9, 10], Yakutsk [11, 12], Pierre Auger [13, 14] and Telescope Array [15] experiments) did not find any signal and resulted in upper limits on the photon flux only. A few claims of the experimental detection of 10^14 eV ≲ E ≲ 10^17 eV photons (Mt. Chacaltaya [16], Tien Shan [17], Yakutsk [18] and Lodz [19]) had low statistical significance.

At the same time, a certain flux of UHE photons is predicted in many models of both conventional and "new" physics. In particular, the flux of secondary photons from interactions of extreme-energy particles with cosmic background radiation, the so-called Greisen-Zatsepin-Kuzmin (GZK) photons, may serve as a tool to distinguish various models of cosmic rays at energies ≳ 5×10^19 eV, because the photon flux is very sensitive to the primary composition at these energies: a predominantly light composition at GZK energies results in a much higher flux of secondary photons. Given the present contradictory situation with the mass composition at UHE (see e.g. Ref. [20] for a detailed review and Ref. [21] for a brief update), searches for GZK photons are now considered very important. Also, a significant contribution to the UHE gamma-ray flux is predicted in particular top-down mechanisms of CR origin ([22] and references therein), in particle-physics models with Lorentz-invariance violation [23] and in models with axion-photon mixing [24].

One of the most promising approaches to the search for primary gamma rays is the study of the EAS muon component. The number of muons in a gamma-ray induced EAS is an order of magnitude smaller than in a usual hadronic shower.
Therefore, one may hope to find photon showers by selecting those which have an unusually low muon content. In the present work, we study the muon content of showers with the estimated number of particles N_e > 2×10^7 and zenith angles θ < 30° detected by the EAS-MSU array [25] in 1982-1990. We demonstrate that the number of muonless events significantly exceeds the background expected from random fluctuations in the development of showers caused by primary hadrons. This result may be interpreted as an indication of the presence of gamma rays, with energies of order 10^17 eV, in the primary cosmic radiation, which confirms and strengthens the first evidence for UHE cosmic photons [26].

The rest of the paper is organized as follows. In Sec. 2, we briefly review the experimental setup (Sec. 2.1), then discuss the data set we study, and muonless events in particular (Sec. 2.2). Sec. 3 is devoted to the estimate of the number of background muonless events for hadronic showers (Sec. 3.1) and to the derivation of the estimated photon flux under the assumption that all muonless events not accounted for by the hadronic background are caused by primary gamma rays (Sec. 3.2). Possible systematic errors in the determination of the flux are discussed in Sec. 3.3. In Sec. 4, we present a detailed study of the distribution of the arrival directions of muonless events on the celestial sphere and test various models of the origin of primary photons. We put our results in the context of the present-day state of the art and briefly conclude in Sec. 5.

2 Experiment and data

2.1 The EAS-MSU array

The description of the EAS-MSU array is given in [25]. The array had an area of 0.5 km^2 and contained 77 charged-particle density detectors (consisting of Geiger-Mueller counters) for the determination of the EAS size N_e, employing the empirical lateral distribution function [27], and 30 scintillator detectors which measured particle arrival times necessary for the determination of the EAS arrival direction. In addition to the surface detectors, which recorded mostly the electron-photon component of an EAS, the array also included four underground muon detectors, also consisting of Geiger-Mueller counters, located at the depth of 40 meters of water equivalent. These detectors recorded muons with energies above 10 GeV. A muon detector with the area of 36.4 m^2 was located at the center of the array, while the other three stations had the area of 18.2 m^2 and were located at distances between 150 m and 300 m from the center (see Fig. 1).

To select the sample of showers with the number of particles N_e > 2×10^7 which we use in this work, 22 scintillator detectors, each of the area of 0.5 m^2, were used. The scintillator detector threshold was set at the level of 1/3 of a relativistic particle. The temporal resolution was ∼ 5 ns. The 22 stations form 13 systems of 4-fold coincidences between counters located at the vertices of tetragons with sides between 150 m and 300 m, which allowed one to select the showers efficiently over the full array area. The scintillator detectors were located at the same points as the Geiger-Mueller counters. The master criterion was determined by the firing, in the time gate of ∼ 6 µs, of at least one of the 4-fold coincidence systems. To reduce the number of sub-threshold events which still satisfy the master conditions, an express analysis of the number of fired Geiger-Mueller counters was invoked.
In each case, it was required that at least 4 of the 22 counters in the selection system recorded a density exceeding 1 particle per square meter. With these selection criteria implemented, the probability of detection of a shower with N_e > 2×10^7 falling anywhere on the array was not less than 95%. The position of the shower axis was determined with the precision of ∼ 10 m. The precision of determination of the arrival direction was ∼ 3°. The number of particles in the shower was determined with the accuracy of ∼ (15-20)%.

[Figure 2: The distribution of muonless events in the distance R between the shower axis and the muon detector. Line: data; shadow: expectation for hadronic primaries.]

2.2 The data set and muonless events

The presence of muon detectors in the EAS-MSU array allows one to search for primary gamma rays. The method is based on the fact that, for N_e ≳ 10^7 and for a hadronic primary, it is highly improbable to have zero muons in the central, 36.4 m^2, detector if the shower axis is within ∼ 240 m from it. At the same time, these muonless events are fully consistent with the conjecture of primary gamma rays. The total number of events with N_e ≥ 2×10^7 in the data set is 1679; of them, 48 are muonless. Fig. 2 presents the distribution of muonless events in the distance R between the shower axis and the muon detector. Most of the muonless events correspond naturally to large R; however, there are a certain number of events close to the axis which are very difficult to explain by random fluctuations of the hadronic background.

One should note that the real number of muonless events is larger than observed because of the non-EAS background, which results in the firing of counters in the central muon detector with an average frequency of 4.6 Hz. In the three other muon detectors, the frequency of random firing was 2 to 3 times higher, and in this work we use only the data of the central detector. It consisted of 1104 counters. For the time of EAS detection, ∼ 15 µs, one expects 0.076 random firings. Therefore, we assume that the probability of absence of a random firing was 0.93.

To obtain a very rough estimate of the probability to have a muonless hadronic event, one may start with the (experimentally known) mean muon lateral distribution function [27] and estimate the expected muon density ρ_µ(N_e, R) for a given core distance R. Then, by making use of the Poisson distribution, one may calculate the probability P(m = 0) to have no muons in the detector at this distance. In Fig. 3, the distribution of m = 0 events in P(m = 0) is shown. The tail at low P(m = 0) indicates that there might be a problem in explaining the observed number of muonless events within the standard model of the shower development.

3 Estimates of the gamma-ray flux

To quantify the observed discrepancy more precisely, we performed Monte-Carlo simulations of proton-induced showers and compared the number of muonless events in data and in simulations.

3.1 Modelling of artificial showers

For the shower simulations, we used the AIRES v. 2.6.0 [28] simulation code, whose choice was determined primarily by its speed. We used the high-energy hadronic interaction model QGSJET-01 [29]. The primary protons were thrown with zenith angles 0° ≤ θ ≤ 30° and with energies between 3×10^16 eV and 2×10^17 eV, assuming the integral spectral index of 2.0. Without the account of fluctuations, the energy of an N_e = 2×10^7 proton shower would be equal to E ∼ 10^17 eV; however, the fluctuations reduce this value.
For the study, showers with N_e ≥ 2×10^7 have been selected; Fig. 4 gives the distribution of the primary energies of the selected artificial showers. In this way, a total of 15000 artificial showers was simulated.

3.2 Estimate of the fraction and flux of gamma rays

The general assumption behind our estimate of the gamma-ray flux is that all muonless events not accounted for by fluctuations of hadronic showers are caused by primary gamma rays. Therefore, the central element of the estimate is the calculation of the expected number of background muonless events from the simulated proton-induced showers. The probability of a zero muon-detector reading, m = 0, was estimated under the assumption (see Ref. [30] for its motivation) that the fluctuations of the muon density in EAS may be represented as a superposition of shower-to-shower fluctuations of the muon density at a given distance from the shower axis and Poisson fluctuations of the number of detected muons.

Suppose that a shower axis came within the ring ∆R_k = R_{k+1} − R_k from the muon detector. Then the muon density of shower i in the ring is determined as

ρ_µ(∆R_k, i) = N_µ(∆R_k, i) / [π(R_{k+1}^2 − R_k^2)],

where N_µ(∆R_k, i) is the number of muons in this ring and i = 1, ..., n_tot enumerates the selected artificial showers. Then the probability of a zero detector reading in the ∆R_k ring is

P(m = 0 | ∆R_k) = (1/n_tot) Σ_i exp(−ρ_µ(∆R_k, i) S_det cos θ),

where S_det is the detector area and θ is the zenith angle of the shower. The total probability of an m = 0 event is

P(m = 0) = Σ_{k=1}^{k_max} P(m = 0 | ∆R_k) w_k,

where k_max gives the total number of rings considered and the last factor, w_k, accounts for the probability for the shower axis to hit the ∆R_k ring.

The results of the calculation of the probability to observe a muonless event are given, for various distances from the shower axis, in Table 1, together with the numbers of observed and predicted muonless events in our sample of 1679 showers. The total probability to have a muonless proton-induced event within 240 m between the detector and the shower axis is 1.4×10^{-2}, which corresponds to ≈ 23 expected muonless events in the sample, to be compared with 48 observed. As expected, the dominant part of the background muonless events should appear in the two outer rings we considered, the same being true also for the observed events. However, the total number of the observed events is almost twice the expected one. This allows one to estimate, based on the Poisson distribution, the number S of signal photon-like events in the sample as S = 25.2 +7.2 −6.6, which transforms into the fraction of 1.50 +0.43 −0.39 % of anomalous muonless events in the sample with N_e ≥ 2×10^7 and θ ≤ 30°.

We want to identify the anomalous muonless showers with showers initiated by primary photons. To determine the fraction of these events in the energy spectrum of cosmic rays, one needs to take into account the difference in the development of showers caused by photons and protons of the same energy. The gamma-ray showers develop more slowly in the atmosphere and arrive younger at the surface level (the vertical atmospheric depth for EAS-MSU is 1025 g·cm^{-2}). On average, for the primary energies ∼ 10^17 eV, the number of particles in a gamma-ray shower detected by the EAS-MSU experiment should be ≈ 1.86 times larger compared to the proton shower. The cut in N_e we use thus corresponds, on average, to the gamma-ray energy of 5.4×10^16 eV. Knowing the total cosmic-ray flux measured by the EAS-MSU array [31], we determine the main result of the present work: the photon fraction of (0.43 +0.12 −0.11)% in the flux of cosmic particles with E > 5.4×10^16 eV, corresponding to the intensity of (1.2 +0.4 −0.3)×10^{-16} cm^{-2} s^{-1} sr^{-1}.

3.3 Estimate of systematic uncertainties

The systematic uncertainty of our result, within the method we use, is related to the estimate of the number of background muonless events from hadronic showers.
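A condensed sketch of this background estimate and the signal arithmetic, with placeholder inputs (the simulated densities, ring boundaries, and helper names here are illustrative; this is not the analysis code of the experiment):

```python
# Condensed sketch of the ring-averaged background estimate; all inputs are
# placeholders, not the actual simulated EAS-MSU data.
import numpy as np

S_DET = 36.4  # area of the central muon detector, m^2

def p_muonless(rho, cos_theta, r_edges, r_max=240.0):
    """rho[i, k]: muon density of simulated shower i in ring k (m^-2);
    cos_theta[i]: cosine of its zenith angle; r_edges: ring boundaries (m)."""
    # Poisson probability of zero detected muons, averaged over showers per ring
    p_zero = np.exp(-rho * S_DET * cos_theta[:, None]).mean(axis=0)
    # weight each ring by the chance of the axis landing in it (uniform axes)
    weights = np.diff(np.asarray(r_edges) ** 2) / r_max**2
    return float(np.sum(p_zero * weights))

# Poisson signal arithmetic quoted in the text: 48 observed muonless events
# versus ~23 expected from the proton background leave ~25 photon-like events.
n_obs, n_bkg = 48, 23
print(n_obs - n_bkg, "+/-", round(float(np.sqrt(n_obs)), 1))  # ~25 +/- 6.9
```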
Hadronic interaction models. The largest uncertainty comes from the variety of models of shower development, which predict different values of the muon number in EAS. Furthermore, this difference is sensitive to the muon threshold energy, which is 10 GeV in our case. A change of the mean expected muon density in EAS by ±10% would result in a change of the number of background muonless showers in the sample by ±4. The results we quote are based on the QGSJET-01 model [29], which gives a good description of the LHC and Pierre Auger Observatory measurements of the high-energy hadronic cross section (cf. Fig. 5 of Ref. [21]) and of the LHC multiplicity distributions (see e.g. Ref. [32]); the choice of the model was also motivated by its computational efficiency. The amount of model-to-model variations of the number of > 10 GeV muons in EAS may be estimated from Ref. [33] and from our own simulations. The effect of the change of the interaction model on our results is summarized in Table 2. We note that, according to experimental data on EAS development, all hadronic-interaction models currently in use underestimate the number of muons in a shower significantly. In particular, several independent indirect analyses of the Pierre Auger Observatory data indicate [37] that the real number of muons is approximately 1.5 times larger than predicted by the QGSJET II-03 model. This number is used in Table 2 and for the estimate of the systematic error; a similar result was obtained with the help of the muon detectors of the Yakutsk EAS array [38, 39]. The systematic error in the resulting gamma-ray flux due to the uncertainty of hadronic models is ±50%, with the upper value favoured by the experimental data.

Primary composition. The assumption of a purely proton composition gives a conservative (i.e. large) estimate of the expected background of muonless events, because heavier primary nuclei produce more muons in EAS. For primary iron, the corresponding number of muons is larger by a factor of ∼ 2.5, which shifts the expected background down, towards zero. This would change our fraction and flux estimates by +90%.

Large fluctuations. Since no model gives a perfect description of hadron-induced air showers, and in particular there are large uncertainties in the predictions of the muon number, one cannot exclude that the fluctuations of the EAS muon content might be much larger than suggested by simulations. Among theoretical approaches, the probability of an occasional very low muon density in a proton shower is the highest in the model of Ref. [40], where the energy equipartition between positive, negative and neutral components of the cascade was postulated. As has been shown in Ref. [41], within the framework of this model it is possible to obtain a probability of ∼ 1% of imitation of a gamma-ray shower by a primary proton. However, this model is much less physically motivated as compared to those which are currently used in simulation codes.

To summarize the discussion of systematic uncertainties, the current experimental and theoretical understanding of the EAS properties suggests that the flux values we obtain are conservative, though they could become lower if physically less motivated models were used for hadronic showers.

4 Arrival directions

In this section, in order to find some hints about the origin of the events we observed, we perform various searches for deviations from isotropy in the distribution of arrival directions of the photon-like events.
Arrival directions

In this section, in order to find hints about the origin of the events we observed, we perform various searches for deviations from isotropy in the distribution of arrival directions of the photon-like events. All tests are performed by comparing, by means of a certain statistical procedure, the real distribution of arrival directions with a simulated one which assumes isotropy. In all cases, the result of a test is given by the probability P that the actual distribution of events is a fluctuation of the isotropic distribution; that is, for small P, the isotropic distribution is excluded at the confidence level (1 − P). For tests of the global (large-scale) isotropy, we use the Kolmogorov-Smirnov method (see e.g. Ref. [42]), which compares one-dimensional distributions of real and simulated events in some observable (e.g., a celestial coordinate). For searches of local (small-scale) anisotropy, we rely on the correlation-function method, which estimates how often the number of pair coincidences of directions from two catalogs (e.g., one of the arrival directions of cosmic rays and another of particular astronomical objects) in simulated samples exceeds the corresponding number obtained from the real data. The notion of a "pair coincidence" depends on the angular distance ∆ between the directions, so the probability P(∆) is often quoted for a certain range of ∆. The clustering properties of the sample of directions are estimated by the same method, with both catalogs being identical cosmic-ray lists. More details on the method may be found e.g. in Ref. [43]. In both approaches, we need to simulate sets of arrival directions under the assumption of an isotropic flux. These sets should take into account the experimental selection effects. For continuously operating surface detector arrays with efficiency close to 100%, the exposure is uniform in the azimuth angle and depends on the zenith angle θ through a purely geometric factor sin θ cos θ, assuming that the incoming flux is isotropic (this is the case when one studies an energy-limited sample of cosmic rays). However, our sample is limited in N_e instead of energy and, due to the different age of showers coming at different zenith angles, the exposure becomes non-geometric. Based on the observed distribution of θ, we determine the acceptance factor as ∼ sin θ cos^9 θ. The distribution of events in the azimuth angle is perfectly consistent with uniform, as expected. The distribution of the arrival directions on the sky, together with the one expected from the exposure for an isotropic flux, is shown in Fig. 5. In the study of the arrival directions, we do not include 3 of the 48 events observed in 1982, for which the determination of the geometry is uncertain.
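A minimal sketch of the large-scale isotropy test, assuming the non-geometric acceptance quoted above: zenith angles for the isotropic Monte-Carlo sets are drawn from ∼ sin θ cos^9 θ by rejection sampling and compared to the data with the Kolmogorov-Smirnov test. The "observed" angles below are random placeholders, not the experimental events.

```python
# Isotropic Monte Carlo with the non-geometric acceptance, plus a KS test
# of a one-dimensional distribution (here the zenith angle itself).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

def sample_zenith(n, theta_max=np.deg2rad(30.0)):
    """Rejection-sample theta from the acceptance ~ sin(theta) cos(theta)^9."""
    out = np.empty(0)
    while out.size < n:
        t = rng.uniform(0.0, theta_max, 4 * n)
        u = rng.uniform(0.0, 1.0, 4 * n)
        f = np.sin(t) * np.cos(t) ** 9
        out = np.concatenate([out, t[u * f.max() < f]])
    return out[:n]

observed = sample_zenith(45)              # stand-in for the 45 usable events
simulated = sample_zenith(100000)         # isotropic Monte-Carlo set
stat, p_ks = ks_2samp(observed, simulated)
print(f"KS statistic = {stat:.3f}, P_KS = {p_ks:.2f}")
```

In the real analysis the compared observable would be a celestial coordinate (e.g. Galactic latitude), with the simulated directions additionally folded through the array's sky coverage.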
Possible scenarios for UHE photons

Among possible mechanisms for the origin of UHE gamma rays, we consider separately those which do not require deviations from standard particle-physics and astrophysical concepts (we will call these scenarios conventional) and those which require the presence of particles and/or interactions beyond the Standard Model of particle physics (these will be called "new-physics" scenarios). Note that high-energy photons interact efficiently with the cosmic background radiation. Assuming standard physics, the energy attenuation length for a ∼10^17 eV photon is as low as ∼35 kpc due to efficient e+e− pair production on the cosmic microwave background (CMB). This means that the observed photons were created in the Galaxy, unless some new physics is assumed.

Conventional scenarios.

Scenario 1. Cosmogenic photons. UHE cosmic particles experience intense interactions with the cosmic background radiation. For protons with energies above ∼5 × 10^19 eV, these are dominated by the GZK [44,45] process of pion production through the ∆ resonance; at lower energies, the dominant mechanism is e+e− pair production. For heavier primaries at E ∼ 10^20 eV, photodisintegration effectively reduces the propagation effects to those of protons of lower energy. The secondary particles from all these interactions (pions, electrons and positrons) are the source of the so-called cosmogenic photons, which appear either from subsequent pion decays or from inverse Compton scattering of e±. There are many works on GZK photons (e.g. Refs. [46,47]); the key point of interest here is the possibility to use these ∼(10^18 − 10^19) eV gamma rays as a tool to determine the composition of the bulk of E ∼ 10^20 eV cosmic rays: due to the GZK process, the flux of secondary photons would be much higher for super-GZK protons than for heavy nuclei. Given the present-day uncertainty in the primary composition at the very end of the CR spectrum (see e.g. Refs. [20,21]), this approach attracts considerable attention, though no sign of GZK photons has been observed yet. The expected flux of GZK photons at E ∼ 10^17 eV is far too low to explain our result; we are not aware of a calculation of the flux at lower energies, nor of the distribution of their arrival directions, which however should be close to isotropic.

Scenario 2. Direct photons from point sources. UHE astrophysical accelerators are expected to emit energetic photons born in interactions of charged particles with ambient matter and radiation. The energy of the accelerated particles should therefore exceed the energy of the photons, roughly by an order of magnitude. It is presently unknown whether the acceleration of particles up to ∼(10^17 − 10^18) eV may happen in any single object in the Galaxy (that is, within the propagation length of ∼10^17 eV photons). In any case, these objects are not expected to be numerous; we therefore expect a certain degree of clustering of the arrival directions of photons in this scenario. Galactic TeV gamma-ray sources may represent plausible candidates for the UHECR accelerators; in this case, the arrival directions would concentrate around them.

"New-physics" scenarios.

Scenario 3. Superheavy dark matter. While the Large Hadron Collider has not discovered any dark-matter candidate so far, models of dark matter which are beyond the reach of this machine are becoming more and more popular. In particular, the superheavy (mass M ≳ 10^18 eV) dark-matter (SHDM) scenario, originally put forward [48] to explain the apparent excess of E ≳ 10^20 eV cosmic rays (presently disfavoured), has its own cosmological motivation. Its important prediction is a significant fraction of secondary photons among the decay products of these superheavy particles; these energetic photons contribute to the UHECR flux. For M ≳ 10^20 eV, the scenario is constrained, but not ruled out [49], by the UHE photon limits; constraints for lower M have not been studied. A characteristic manifestation of this mechanism is the Galactic anisotropy [50] of the arrival directions of photons, related to the non-central position of the Sun in the Galaxy.

Scenario 4. Axion-like particles and BL Lac correlations.
The UHECR data set with the best angular resolution ever achieved (0.6°), that of the High Resolution Fly's Eye (HiRes) in stereo mode, demonstrated hard-to-explain correlations of the arrival directions of E ≳ 10^19 eV events with distant astrophysical sources, BL Lac type objects [51,52], which suggest that ∼2% of the CR flux at these energies consists of neutral particles arriving from these objects. The only self-consistent explanation of this phenomenon [24] which does not require violation of Lorentz invariance suggests that the observed events are caused by gamma rays which mix with hypothetical new light particles (axion-like particles, ALPs) in cosmic magnetic fields. This would allow them to propagate freely through the cosmic photon background in the form of the inert ALP and then convert back to real photons in a region of magnetic field close to the observer. This approach may also explain some other astrophysical puzzles. A test of this scenario may be performed by cross-correlation of the arrival directions with the same BL Lac catalog as in Refs. [51,52].

Scenario 5. Lorentz-invariance violation. There is no lack of theoretical models with tiny violations of relativistic invariance on the market. In some of them, this effect results in an efficient increase of the mean free path of an energetic photon through the CMB [23]. Though these models have many free parameters with no particularly motivated choice, one may choose parameters such that UHE photons propagate over much larger distances.

The results of the tests are as follows.

1. Test of scenario 2: clustering. See Fig. 6, where the probability that the observed excess of pairs of events in the angular bin ∆ is a fluctuation of the isotropic distribution is plotted as a function of ∆. No significant clustering of events is found.

2. Test of scenario 2: Galactic TeV sources. Fig. 7 presents the P(∆) function for cross-correlations of the arrival directions of the photon-like events with the positions of Galactic TeV sources from the TeVCat catalog [53], as of May 2013. No sign of correlation is seen.

3. Test of scenario 2: Galactic-plane correlation. Galactic UHECR accelerators of a yet unknown type are still expected to concentrate along the Galactic plane, and the distribution of events in the Galactic latitude b is a model-independent test of this scenario. Fig. 8 illustrates that the distribution of the photon-like events in b is consistent with that expected for an isotropic flux (the Kolmogorov-Smirnov probability P_KS ≈ 0.66).

4. Test of scenario 3: Galactic anisotropy. The SHDM-related Galactic anisotropy should reveal itself in a dipole excess in the distribution of events in the distance to the Galactic Center. Fig. 9 demonstrates that no such excess is seen (P_KS ≈ 0.61).

5. Test of scenario 4: BL Lac correlations. The HiRes BL Lac correlations [51] appeared as an excess of events close to the positions of 156 bright BL Lac type objects selected from the catalog [54] by a cut on the optical magnitude, V < 18^m. A subsequent study [52] suggested also a correlation with TeV-selected BL Lacs.

[Figure 7: The test of correlation with Galactic TeV sources: the probability P(∆) to have the observed or higher number of events within the angular distance ∆ from TeVCat [53] Galactic TeV sources, as a fluctuation of the isotropic distribution.]

[Figure 9: The distribution of the observed photon-like events (line) and Monte-Carlo isotropic events (shadow) in the angular distance to the Galactic Center.]
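A minimal sketch of the correlation-function method used in the tests above: count pairs within angular distance ∆ between event directions and a source catalog, then estimate P(∆) as the fraction of isotropic Monte-Carlo sets with at least as many pairs. Unit vectors stand in for (RA, Dec); both event set and catalog are random placeholders, and for brevity the sketch ignores the array's non-geometric exposure.

```python
# Pair-counting cross-correlation test against a catalog, with a
# Monte-Carlo estimate of P(Delta).
import numpy as np

rng = np.random.default_rng(11)

def rand_dirs(n):
    """Isotropic unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def n_pairs(a, b, delta_deg):
    """Number of (a_i, b_j) pairs separated by less than delta_deg degrees."""
    cosmin = np.cos(np.deg2rad(delta_deg))
    return int(np.sum(a @ b.T > cosmin))

events, catalog = rand_dirs(45), rand_dirs(156)    # e.g. 156 bright BL Lacs
delta = 3.0                                        # angular bin, degrees
n_data = n_pairs(events, catalog, delta)

n_mc = np.array([n_pairs(rand_dirs(45), catalog, delta) for _ in range(2000)])
p_delta = np.mean(n_mc >= n_data)
print(f"pairs within {delta} deg: {n_data};  P(Delta) = {p_delta:.3f}")
```

Scanning `delta` over a range of angles yields the P(∆) curves shown in Figs. 6, 7 and 10.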
In Fig. 10, we present the results of a similar analysis for our photon-like sample, with the same catalog of 156 BL Lacs and with an updated list of TeV BL Lacs from TeVCat [53]. No significant correlation is seen.

[Figure 10: The test of correlation with BL Lac type objects: the probability P(∆) to have the observed or higher number of events within the angular distance ∆ from bright Véron BL Lacs (sample of Ref. [51], full line) and TeVCat [53] TeV BL Lacs (dashed line), as a fluctuation of the isotropic distribution.]

Discussion and conclusions

The place of our result among others is rather specific. All previous studies put upper limits on the photon flux or fraction for the primary energy intervals ∼(10^14 − 5 × 10^16) eV and ≳10^18 eV. The EAS-MSU result, first reported in Ref. [26], therefore represents the first statistically significant detection of cosmic photons with energies above ∼100 TeV. In the present work, we performed the first estimate of the gamma-ray flux in the previously unstudied energy window (5 × 10^16 − 10^18) eV and estimated the statistical and systematic errors of its value. The result is compared with limits obtained by other experiments in Fig. 11 (flux) and Fig. 12 (fraction). The fraction estimates should be interpreted with great care because they are sensitive to the energy determination of the bulk of hadronic primaries, which is known to suffer from large systematic uncertainties due to the limited understanding of high-energy hadronic interactions. In contrast, the photon flux estimates are more robust because they use only the primary gamma-ray energy determination and the exposure of the array, both quantities being well understood. Therefore, the main result of this paper is the flux estimate, Eq. (1).

[Figure 11: The diffuse cosmic photon integral flux versus the minimal photon energy. The result of the present work is shown as a cross whose vertical line represents the error bars. Tentative detections and upper limits from other experiments are indicated by symbols: star (Tien Shan [17], detection), open star (Lodz [19], detection), gray triangle (EAS-TOP [4]), gray squares (CASA-MIA [5]), gray diamonds (KASCADE [6,55]), triangles (Yakutsk [12]), open diamonds (Pierre Auger [13,14]), boxes (AGASA [8]), large squares (Telescope Array [15]).]

[Figure 12: The fraction of gamma-ray primaries in the diffuse cosmic-ray integral flux versus the minimal photon energy. Notations are the same as in Fig. 11; in addition, further Yakutsk results [11], results from Haverah Park ([7], open squares), from a reanalysis of the AGASA data ([9], the highest-energy AGASA-like square), and from a combination of AGASA and Yakutsk data ([10], large open square) are shown.]

The interpretation of the result is problematic. Leaving aside the discrepancy with the CASA-MIA result in terms of the (uncertain) gamma-ray fraction, the more robust flux estimate, Eq. (1), does not formally contradict any existing experimental constraint, but is clearly in conflict with the general trend observed both at lower and higher energies (see footnote 1 and Fig. 11). Within conventional scenarios, these photons cannot travel for longer than a few dozen kpc and should therefore be born in the Galaxy. However, we do not see any significant Galactic (or any other) anisotropy in the distribution of the arrival directions. To add to the troubles, in some scenarios it would be difficult to avoid a conflict with measurements of the ∼1 GeV diffuse photon flux, to which secondary photons from electromagnetic cascades in the Universe have to contribute.
The estimates we made were obtained under the assumption that those muonless events which are not accounted for by fluctuations of hadronic showers, and only those, are caused by primary photons. This assumption is a reasonable first approximation, but it suggests two directions for future work. Firstly, gamma-ray showers have a low but nonzero number of muons (the muons appear through photonuclear interactions). The presence of a certain number of muonless events implies, within the photon hypothesis, that there should be an excess of muon-poor events in the data, which is yet to be tested. Secondly, there are other observables, not directly related to the muon number, which may distinguish photon showers from hadronic ones. One approach is to study the shower-front curvature, which is related to the depth of the maximal development of the electromagnetic cascade; it has been used to search for primary photons in experiments which do not have muon detectors, e.g. Ref. [15]. This method is particularly promising for the EAS-MSU data because the array was dense and the number of detector stations which recorded an N_e ≳ 10^7 shower was typically large.
Differences in Stroke or Systemic Thromboembolism Readmission Risk After Hospitalization for Atrial Fibrillation and Atrial Flutter

Background: Although atrial fibrillation (AF) and atrial flutter (AFL) are different arrhythmias, they are assumed to confer the same risk of stroke and systemic thromboembolism (STE), despite a lack of available evidence. In this study, we investigated the difference in the risk of stroke or STE after AF and AFL hospitalizations.

Methodology: The National Readmission Database (NRD) 2018 was used to identify AF and AFL patients using appropriate International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes, who were then followed until the end of the calendar year to identify stroke or STE readmissions. Survival estimates were calculated, and a Cox proportional hazards model was used to calculate the adjusted hazard ratio (aHR) and compare the risk of stroke or STE readmissions between the AF and AFL groups.

Results: A total of 215,810 AF and 15,292 AFL patients were identified. AFL patients were more likely to be younger (66 vs. 70 years) and male (68% vs. 47%), and had a higher prevalence of obesity (25% vs. 22%), obstructive sleep apnea (14% vs. 12%), diabetes mellitus (31% vs. 26%), and alcohol use (6.9% vs. 5.5%) (all p < 0.01). After adjusting for potential patient- and hospital-level characteristics, there was a statistically significant decrease in one-year stroke or STE readmission risk in AFL patients compared with AF patients (aHR 0.79 (0.66-0.95); p = 0.01).

Conclusions: AFL patients are commonly younger males with a higher burden of medical comorbidity. There is a decrease in the one-year risk of stroke or STE events in AFL patients compared with AF. The predictors of stroke and STE are similar in both the AFL and AF groups. Further studies with longer follow-up and anticoagulation data are needed to verify the results.

Introduction

Atrial fibrillation (AF) is associated with an increased risk of cardioembolic strokes and systemic thromboembolism (STE) [1-3]. Despite the common impression that AF and atrial flutter (AFL) carry a similar stroke or STE risk, the relationship between AFL and stroke/STE has been addressed in only a few studies [4,5]. Furthermore, CHA2DS2-VASc scoring has not been well established for AFL patients [6]. Although AF and AFL are distinct arrhythmias, they tend to co-exist within patients [7]. The formation of STE in AF is evidenced to be multifactorial, one reason being abnormal blood flow leading to stasis in the left atrium and left atrial appendage (LAA) [8]. Studies show a lower risk of LAA clot formation and, theoretically, a lower risk of cardioembolic stroke and STE in AFL [9]. However, the currently available evidence on this topic is inconclusive, with uncertainty in the long-term difference in thromboembolic risk between AFL and AF. Therefore, the American College of Cardiology/American Heart Association/Heart Rhythm Society's 2019 focused update recommends the same stroke risk assessment and anticoagulation strategies for AFL patients as for AF [10]. However, in the 2019 European Society of Cardiology guidelines for managing supraventricular tachycardia, the threshold for anticoagulation initiation in AFL patients without AF was not established [11]. Using the National Readmission Database (NRD), we investigated the difference in the risk of stroke or STE readmissions between AFL and AF. In addition, we investigated the predictors of stroke and STE rehospitalizations in both groups.
Data source

We conducted a retrospective cohort study using the 2018 NRD. The NRD is a database developed for the Healthcare Cost and Utilization Project (HCUP), sponsored by the Agency for Healthcare Research and Quality through a Federal-State-Industry partnership. In 2018, the NRD contained data from 28 geographically dispersed states, accounting for approximately 60% of the total US resident population and 58.7% of all US hospitalizations [12]. It contains reliable, verifiable patient linkage numbers (defined as the "NRD_VISITLINK" variable within the dataset) that can track a patient across hospitals within the same state while adhering to strict privacy guidelines. The NRD comprises more than 100 clinical and non-clinical variables for each hospital stay. Each discharge is weighted to calculate national estimates. The NRD and other administrative data have previously been used to provide reliable national stroke risk estimates through readmissions [13,14]. The NRD for 2018 contained patient- and hospital-level data with up to 40 diagnoses and 25 procedures for each patient, coded using appropriate International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes. Institutional Review Board approval was not required due to the deidentified nature of the data. We followed the checklist recommended by HCUP for working with the NRD [15].

Study population and outcome

Index AF and AFL admissions were identified by the presence of their respective ICD-10-CM codes as the primary diagnosis for that hospitalization (see Table 1 for the list of ICD-10-CM codes used in this study). Patients with a secondary AFL diagnosis in the AF index admission group and patients with a secondary AF diagnosis in the AFL index admission group were excluded to mitigate the overlap of AF and AFL within the same patient. We excluded patients aged ≤18 years, trauma-related readmissions, and elective readmissions. Further details regarding the inclusion and exclusion criteria are shown in Figure 1. The first observed AF or AFL hospitalization in the NRD 2018 was defined as the index admission. These patients were followed until the end of the 2018 calendar year. The primary outcome of our study was to identify and compare ischemic stroke and STE readmission rates after AF and AFL index admissions. The stroke or STE readmissions were identified by their ICD-10-CM codes (see Table 1) in the primary or secondary diagnosis sections of readmissions.

Statistical analysis

Statistical analyses were performed using the Stata software package, version 17.0 (StataCorp, College Station, TX). Stata's survey package facilitates analysis by accounting for the NRD's complex sampling design, which includes stratification, clustering, and weighting to produce national estimates. We used the chi-square test to evaluate differences between groups for categorical variables and the Student's t-test for differences between sample means of continuous variables. Survival analysis was performed with the time from index hospitalization discharge to readmission as the time variable and stroke or STE readmission as the failure variable, to produce Kaplan-Meier curves. Patients who did not experience failures were censored on day 365 after discharge. Univariate Cox regression analysis was performed to calculate the unadjusted hazard ratio (HR) for the primary outcome. Subsequently, multivariate Cox regression analysis was used to adjust for potential confounders and produce adjusted hazard ratios (aHRs).
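For illustration, a minimal sketch of the survival workflow described above (Kaplan-Meier curves plus a multivariable Cox model), using the open-source Python package `lifelines` as a stand-in for the Stata procedures reported in the paper. The data frame and all variable names are hypothetical toy data, not actual NRD fields, and the weighting/clustering of the survey design is omitted for brevity.

```python
# Kaplan-Meier estimation by rhythm group and a Cox proportional-hazards fit.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "afl": rng.integers(0, 2, n),            # 1 = atrial flutter, 0 = AF
    "age": rng.normal(68, 12, n).round(),
    "female": rng.integers(0, 2, n),
})
# Toy event times: stroke/STE readmission within 365 days, else censored.
base = rng.exponential(4000, n)
df["time"] = np.minimum(base, 365)
df["event"] = (base <= 365).astype(int)

# Kaplan-Meier curves by rhythm group (analogous to Figure 2).
km = KaplanMeierFitter()
for grp, sub in df.groupby("afl"):
    km.fit(sub["time"], sub["event"], label=f"afl={grp}")
    print(km.survival_function_.tail(1))

# Multivariable Cox model: aHR for AFL vs. AF adjusted for covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```

The `exp(coef)` column for `afl` plays the role of the aHR reported in the Results (0.79 in the actual study).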
Multiple covariates were built into the model based on the clinical experience of the authors, the currently available literature, and significant association with the outcome on univariate analysis at a p-value < 0.2. All p-values were calculated from two-tailed tests, with 0.05 as the threshold for statistical significance.

Baseline characteristics of index admissions

We identified 215,810 weighted index admissions with AF and 15,292 with AFL (see Table 2 for demographic data). The AFL group was younger than the AF group (mean age of 66.7 years vs. 70.1 years; p < 0.01), with approximately 70% of patients <75 years of age, and was more commonly male. There was no difference in the mean Charlson comorbidity score between the AFL and AF groups, but the AFL group had a lower CHA2DS2-VASc score than the AF group (mean score of 2.8 vs. 3.3; p < 0.01). Hypertension, diastolic heart failure, a history of stroke, and hyperthyroidism were more common in the AF group, whereas obesity, obstructive sleep apnea, diabetes mellitus, and nicotine dependence were more common in the AFL group.

[Table 2 note: ICD-10-CM codes were utilized to identify the comorbidities; these codes are reported in Table 1.]

Difference in the risk of stroke or STE readmissions

After adjusting for potential confounders (age, gender, hypertension, diabetes, history of prior stroke, chronic kidney disease (CKD), obesity, obstructive sleep apnea, Charlson comorbidity score, CHA2DS2-VASc score, malignancy, smoking, and alcohol status), AFL was associated with lower hazards of stroke or STE readmission over one year (1.4% for AFL vs. 2.1% for AF; aHR = 0.79 (0.66-0.95); p < 0.01) (Table 3). Figure 2 shows the Kaplan-Meier survival curves comparing the AF and AFL groups for the primary outcome.

Predictors of stroke or STE readmissions after AF and AFL

Age, female gender, higher Charlson comorbidity score, higher CHA2DS2-VASc score, hypertension, diabetes, coronary artery disease, history of stroke, CKD stage ≥3, and peripheral vascular disease were identified as independent predictors of stroke or STE in AFL patients. Similar factors were identified in AF patients; however, hypertension, diabetes mellitus, and peripheral vascular disease showed similar risk, and CKD stage ≥3 showed a decreased risk of stroke or STE in the AF group (see Table 4 and Table 5).

Discussion

Our study demonstrates evidence of a reduced risk of readmission for stroke or STE in AFL patients compared with AF patients. On average, AFL patients were younger, more likely to be male, and had a lower mean CHA2DS2-VASc score. However, the AFL cohort had a higher burden of relevant comorbidities such as diabetes, end-stage renal disease, and obesity, as well as tobacco and alcohol use. Yet, after adjustment for these covariates in our final model, the AFL cohort maintained a 20% reduced risk of stroke or STE readmission compared with the AF cohort. While AFL is electrophysiologically distinct from AF, epidemiologic studies describing AFL alone are limited. Data from the Framingham Heart Study on 112 patients with AFL, matched by age and sex with AF patients as well as healthy controls, revealed that, compared with controls, smoking, moderate-to-heavy alcohol use, and a history of myocardial infarction and/or heart failure were associated with incident AFL. Compared with AF, AFL patients had less heart valve disease [16].
Our cohort of hospitalized patients with AFL also had significant levels of alcohol and tobacco use disorders, and compared with AF patients, these disorders were more prevalent in AFL patients. While a trend toward more alcohol and tobacco use in AFL was seen in the Framingham Heart Study, it was not significant at the 95% level (smoking: adjusted odds ratio (aOR) = 1.47, 95% confidence interval (CI) = 0.79-2.72; moderate-to-heavy alcohol use: aOR = 1.58, 95% CI = 0.68-3.68) [16]. We were likely able to identify a significant difference between AFL and AF for these disorders due to the greater size of our AFL cohort. Previous studies using animal models have demonstrated how tobacco [17,18] and alcohol [19] are associated with atrial arrhythmia formation, and while specific literature describing the relationship between tobacco and AFL is limited, alcohol has been associated with AFL formation in humans [20,21]. Further studies would be beneficial to examine the specific risks of alcohol, tobacco, and other potential risk factors for AFL, ideally with cohorts large enough to detect modest but significant differences. The findings of our study add to the limited evidence base evaluating the risk of stroke in AFL. Previous studies reported similar rates of stroke and/or STE risk in small groups of AFL patients, without any comparison to an AF patient group. Wood et al. reported an annual stroke risk of 1.6% in 86 AFL patients referred for radiofrequency ablation, with a mean follow-up of 4.5 years [5]. Similar findings were reported by Seidl et al., with an annual risk of approximately 1.8% for thromboembolic events in 191 AFL patients [4]. A few other studies evaluated the risk of AFL in comparison to a cohort of AF patients, including those of Rahman et al., Halligan et al. [22], and Al-Kawaz et al. [23]. However, there was also a significant conversion rate from AFL to AF (66%) across a mean follow-up time of 2.8 years. A common limitation of research into stroke and STE events in AFL is that, similar to the studies by Halligan et al. [22] and Al-Kawaz et al. [23], AF eventually develops in many patients who initially present with AFL [24], and efforts to isolate a lone-AFL cohort that does not develop AF are difficult, especially considering that AF can present asymptomatically [25]. We attempted to limit this issue within our study by excluding patients who had codes for both AF and AFL at the index hospitalization. However, a limitation remains in that we cannot fully exclude the possibility that some patients with AFL in our study also had AF, and vice versa, due to the potential for inaccurate coding within the dataset, thus raising the potential for misattribution bias. Future research will ideally develop methods of ensuring as little crossover as possible between AFL and AF groups to determine the true stroke and STE risk of AFL more definitively compared with AF. For our AFL cohort, increased age and female sex, as well as hypertension, diabetes, previous stroke, and peripheral vascular disease, were found to be significant predictors of stroke and STE readmission (see Table 4); notably, these are many of the same predictors of stroke that exist for AF [26]. According to the 2014 AHA/ACC/HRS guidelines on the management of AF, it is a Class I recommendation to manage the risk of stroke in AFL as in AF, including using the CHA2DS2-VASc score to guide decision-making regarding anticoagulation [27].
However, this recommendation was made based on expert opinion, without citation of evidence. While our research supports this decision by identifying many of the same risk factors for stroke in AFL as in AF, future research into whether patients with AFL would benefit from different stroke risk assessments than AF patients would help establish an evidence basis for clinical decision-making.

Limitations

Our study has several limitations that are common to studies of administrative data. As stated previously, our data are derived from administrative codes that are used for insurance billing rather than clinical purposes. As such, they are subject to misclassification bias from inaccurate entries or missing codes [28]. However, the missing data among the variables we used amounted to less than 2% in total and were unlikely to cause significant changes to our results. In addition, we used diagnosis codes for AF, AFL, and all comorbidities that have been validated or used in prior studies [29,30]. Second, data on medication use and adherence were not available within the NRD. Hence, we were unable to assess the specific types or doses of anticoagulants used, or whether patients were prescribed any form of rate or rhythm control. Third, because most patients in both the AF and AFL groups were more than 65 years old, our findings may not be generalizable to a younger population. Fourth, as the linkage variable "NRD_VISITLINK" does not carry over across multiple years, the maximum follow-up available for this study is one year. Therefore, stroke or STE events after one year are not captured, and as such events can take more than one year to develop, our study may not capture the true rate of stroke and STE events in AF and AFL patients. Finally, patients who underwent ablation procedures were not excluded, which is another limitation: maintaining sinus rhythm reduces the risk of stroke, so AFL patients are more likely to have a lower stroke risk after an ablation procedure. Further prospective cohort studies with longer follow-up and accessible anticoagulation data are needed to further clarify the difference in stroke risk between AF and AFL.

Conclusions

Our evidence suggests a decreased risk of stroke and STE events in AFL compared with AF. For AFL, the predictors of stroke and STE are similar to those of AF, such as increased age, previous stroke, hypertension, diabetes, and peripheral vascular disease. Studies with anticoagulation information and longer follow-up periods are needed to verify these results.

Additional Information

Disclosures

Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Association between Ramadan Fasting and cerebrovascular diseases

Background and objective: Cerebrovascular diseases are attacks of sudden neurological deficits (motor, sensory, or cerebellar). There are many risk factors for stroke, such as age, diabetes, hypertension, smoking, hyperlipidemia, and cardiac diseases. This study aimed to determine whether the rate of cerebrovascular events increases during Ramadan, in relation to fasting, in our city during the summer season.

Methods: This case-control study was carried out in Rizgary Teaching Hospital, Erbil, Iraq, from the 1st to the 30th of August 2011. Patients were included in this study if they were middle-aged or elderly and had clinical and radiological features of stroke; another group of in-patients was selected as a control group. The chi-square test and logistic regression analyses were used to assess the association between stroke and fasting.

Results: A sample of 60 patients and 60 control cases was included in this study. Fasting was a significant risk factor for stroke in our sample: 66.7% of the cases were fasting compared with 40% of the control group (P = 0.03). Hyperlipidemia and a history of ischemic heart disease were also found to be associated with stroke (P = 0.017 and 0.011, respectively). Logistic regression analysis showed that only fasting and hypercholesterolemia were independent risk factors for stroke in our sample.

Conclusion: In Erbil, where the summer is very hot and the daytime (fasting hours) is long, fasting during Ramadan was found to be an independent risk factor for stroke, specifically ischemic stroke.

Introduction

Ramadan fasting is one of the five fundamental Islamic rules. Muslims abstain from eating, drinking, and sexual contact from sunrise to sunset for one complete lunar month. Over one billion Muslims fast worldwide during the month of Ramadan [2-4]. The time of observance differs each year because it follows the lunar calendar: the period corresponding to the month of Ramadan shifts every year relative to the solar calendar, since the lunar year is 11-12 days shorter than the solar year [5]. The fasting period from sunrise to sunset varies with the geographical site and season. In the summer months and at northern latitudes, the fast can last 18 hours or more. At sunset, people usually eat a large meal and then rest or pray, which in turn requires some effort. Just before sunrise, people wake from sleep and have another rich meal. Then, some of them pray and go to sleep again with a full stomach, while others stay awake and continue their daily routines. Fasting has some positive and probably some negative effects on certain people, especially sick people. The obligation that the daily calorie intake be taken in 1 or 2 meals instead of 3 to 5 has an adverse effect on certain people. An early-morning walk to the mosque seems unwise for heart-disease patients [6]. Committed busy doctors in Erbil, or in Iraq as a whole, may have additional obligations, including the crowded afternoon clinic, which may increase thirst and hunger. Other probable negative effects are that fasting patients cannot take any drugs, such as antihypertensives or antiplatelet drugs, or receive intravenous fluids, and the regulation of diabetes mellitus is also negatively affected by an unfamiliar diet pattern [5].
Moreover, during hot weather and in warm climates, a higher possibility of haemoconcentration should also be considered. Although sick people are exempted from Ramadan fasting, most do not conform to this recommendation because of cultural factors [2]. This study aimed to assess the effect of Ramadan fasting on the rate of stroke in our population in Erbil city in the north of Iraq. We chose this year because Ramadan fasting occurred during extremely hot weather (midsummer, August 2011), in which the temperature in the shade approached 50°C and even more between 1:00 and 4:00 PM. In addition, people had to go to work in spite of that, and although people use modern technology for cooling, sometimes it is not enough to cope with the very hot and humid environment. To our knowledge, no previous study of this kind had been carried out in Iraq, and studies tackling this subject are few, performed mostly in nearby countries such as Egypt, Qatar, Turkey, and Iran.

Methods

A case-control study with a total sample of 120 respondents (60 patients and 60 controls) was carried out in Rizgary Teaching Hospital, Erbil, Kurdistan, Iraq. Data collection was done during the Ramadan month of 2011, from the 1st to the 30th of August. Inclusion criteria for cases were any patient attending the emergency room of Rizgary Teaching Hospital during this period with clinical features of acute weakness, sensory deficits, or brainstem deficits, and radiological features of stroke obtained from CT scan and MRI examinations. The CT machine was a Siemens single-slice Somatom Emotion with 4-8 mm slice thickness (Siemens, Erlangen, Germany), and the MRI scanner was a Siemens 1.5 Tesla (Siemens, Erlangen, Germany). The control group was collected from the medical wards of Rizgary hospital during the same period, provided that they had no history of stroke or transient ischemic attack (TIA). A questionnaire was designed by the researchers to collect information about age, sex, and known stroke risk factors, including hypertension, diabetes mellitus, smoking, hyperlipidaemia (abnormally elevated levels of any or all lipids and/or lipoproteins in the blood), and hypercholesterolemia (the presence of high levels of cholesterol in the blood). Information was also collected regarding the history of cardiac diseases (AF, IHD, HF, pulmonary hypertension, and others), previous stroke or TIA, drug history, and whether the patient was fasting or not. Data were analyzed using the Statistical Package for the Social Sciences (version 19). The chi-square test of association was done to compare proportions among cases and controls. Factors found to be associated with stroke (by chi-square test) were entered into a logistic regression model. A P value of ≤0.05 was considered statistically significant.

Results

A sample of 60 patients and 60 controls was studied, with female-to-male ratios of 1.3:1 and 1.1:1 and mean ± SD ages of 62.24 ± 12.7 and 60.25 ± 12.667 years, respectively. Table 1 shows significant differences in the age distribution of cases and controls, as the cases were relatively older than the controls; around half (46.7%) of the cases were 70 years old or older (P = 0.037). Table 2 shows that 66.7% of the cases were fasting compared with 40% of the control group (P = 0.003). The same table shows that 71.7% of the cases were hypertensive compared with 50% of the controls (P = 0.003). Hyperlipidemia, including hypercholesterolemia, and a history of ischemic heart disease were also found to be associated with stroke (P = 0.031 and 0.019, respectively). Logistic regression analysis (Table 3) showed that only fasting and hypercholesterolemia were significantly associated with stroke (OR = 3.8 and 11.2, respectively). The pathological cause of stroke in 93% of our sample was ischemia (embolic, thrombotic, and lacunar); only 7% were hemorrhagic strokes, and 77.4% of the ischemic strokes were anterior-circulation strokes, while posterior-circulation strokes represented the remaining 22.6%, as shown in Table 4. The study showed a significant association between Ramadan fasting and stroke. More research is needed in this field, with larger sample sizes and different study designs.

[Table 1: Age distribution of cases and controls. *By Fisher's exact test.]

[Table 3: Logistic regression analysis with stroke as the dependent variable and several covariates.]
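A minimal sketch of the analysis pipeline described above: a chi-square test on the fasting-by-group 2x2 table, followed by a logistic regression, using scipy and statsmodels. The 2x2 counts are derived from the Results (66.7% of 60 cases and 40% of 60 controls were fasting); the covariate data frame for the regression is a hypothetical stand-in, as the patient-level data are not available.

```python
# Chi-square test of association plus a toy logistic regression.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Fasting status (rows) by case/control (columns): 40/60 vs. 24/60 fasting.
table = np.array([[40, 24],
                  [20, 36]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Logistic regression: stroke (1/0) on fasting and hypercholesterolemia.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "stroke": rng.integers(0, 2, 120),
    "fasting": rng.integers(0, 2, 120),
    "hypercholesterolemia": rng.integers(0, 2, 120),
})
X = sm.add_constant(df[["fasting", "hypercholesterolemia"]])
fit = sm.Logit(df["stroke"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios, analogous to Table 3
```

With the counts above, the chi-square test yields a small P value, in line with the significant association between fasting and stroke reported in the Results.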
Seedling development traits in Brassica napus examined by gene expression analysis and association mapping

Optimal seedling development of Brassica napus plants leads to higher yield stability even under suboptimal growing conditions and is therefore of high importance for plant breeders. The objectives of our study were to (i) examine the expression levels of candidate genes in seedling leaves of B. napus and correlate these with seedling development, as well as (ii) detect genome regions associated with gene expression levels and seedling development traits in B. napus by genome-wide association mapping. The expression levels of the 15 candidate genes examined in the 509 B. napus inbreds showed an average standard deviation of 5.6 across all inbreds, ranging from 3.2 to 8.8. The gene expression differences between the 509 B. napus inbreds were more than adequate for correlation with the phenotypic variation of seedling development. An average absolute correlation coefficient of 0.11 was observed, with a range from 0.00 to 0.39. The candidate genes GER1, AILP1, PECT, and FBP were strongly correlated with the seedling development traits. In a genome-wide association study, we detected a total of 63 associations between single nucleotide polymorphisms (SNPs) and the seedling development traits, and 31 SNP-gene associations for the candidate genes, with a P-value < 0.0001. For the projected leaf area traits, we identified five different association hot spots on the chromosomes A2, A7, C3, C6, and C7. A total of 99.4% of the adjacent SNPs on the A genome and 93.0% of the adjacent SNPs on the C genome had a distance smaller than the average range of linkage disequilibrium. Therefore, this genome-wide association study is expected to reach, on average, 14.7% of the possible power. Compared with previous studies in B. napus, the SNP marker density of our study is expected to provide a higher power to detect SNP-trait/-gene associations in the B. napus diversity set. The large number of associations detected for the examined 14 seedling development traits indicates that these traits are genetically complex. The results of our analyses suggest that the studied genes ribulose-1,5-bisphosphate carboxylase/oxygenase small subunit (RBC) on chromosomes A4 and C4 and fructose-1,6-bisphosphatase precursor (FBP) on chromosomes A9 and C8 are cis-regulated.

Background

Well-developed seedlings lead to higher yield stability even under suboptimal growing conditions such as reduced nutrient input or drought stress [1]. Therefore, variation during early developmental stages of Brassica napus plants is important for the selection decisions of plant breeders. Up to now, however, the genetics of seedling development in B. napus has been poorly understood. In comparison to linkage mapping, association mapping studies can achieve a higher mapping resolution because, in a diversity set, linkage disequilibrium (LD) decays faster than in the segregating populations used for linkage mapping [2]. Furthermore, association mapping studies benefit from the broader array of genetic diversity represented, compared with linkage mapping studies [3,4]. Hasan et al. [5] identified, in an association mapping study in B. napus, simple sequence repeat (SSR) markers physically linked to candidate genes for glucosinolate biosynthesis in Arabidopsis thaliana that were associated with variation in the seed glucosinolate content of B. napus.
For traits for which less prior information is available, a high number of markers is necessary to detect phenotype-marker associations at a genome-wide level. The number of SSR markers available in the B. napus genome is expected to be too low for this purpose [6]. Furthermore, the genotyping of such a high number of markers is very expensive. To overcome this problem, Honsdorf et al. [7] tested the association between 684 genome-wide distributed amplified fragment-length polymorphism (AFLP) markers and 14 traits in a set of 84 canola-quality winter rapeseed cultivars. They identified between one and 22 putative quantitative trait loci (QTL), which explained between 15 and 53% of the phenotypic variance for ten of the 14 traits. The results of LD analyses suggested, however, that more than 2,000 evenly distributed markers will be required to detect marker-phenotype associations with reasonable power in rapeseed [2]. However, it is difficult to obtain a higher number of markers with the AFLP technique in rapeseed [7]. Furthermore, because the sequence information of AFLPs cannot easily be inferred, their use in marker-assisted selection programs is difficult. Hence, single nucleotide polymorphisms (SNPs) are the most suitable marker type to cover a complex genome like that of B. napus at the density required for genome-wide association studies (GWAS). Therefore, a custom SNP array was used in this study to genotype the entire diversity set. Differential expression of genes during the seedling development stage is potentially an important source of phenotypic variation [8,9]. In our study, genes were selected based on a co-expression network analysis. The expression of these genes, as well as of candidate genes from the literature, was examined in the entire diversity set and correlated with the phenotypic observations. The objectives of our study were to (i) examine the expression levels of candidate genes in seedling leaves of B. napus and correlate these with seedling development, as well as (ii) identify genome regions associated with different gene expression levels and seedling development traits in B. napus.

Methods

The multiplication of the genotypes was carried out in a way that minimized maternal environmental effects. The genotypes were grown in six replicates for 30 days in an α-lattice design with 24 blocks of 24 pots in a greenhouse experiment. As described in detail earlier [10], a large number of seedling development traits were assessed to cover a wide range of aspects as well as developmental stages during seedling growth which could be measured with high-throughput methods (Table 1).

Plant material for weighted gene co-expression network analysis

The doubled haploid (DH) winter oilseed rape mapping population ExV8-DH, which segregates for multiple seed quality, developmental, and performance traits, was the basis for the weighted gene co-expression network analysis (WGCNA). Pooled seedling development traits from 250 lines of the ExV8-DH population, described previously by Basunanda et al. [11], measured in replicated greenhouse trials in 2007 and in field trials at four locations from 2005-2007, were used to select two groups of 47 ExV8-DH lines with the highest and lowest respective mean performance for developmental and yield-related traits.
Digital gene expression analysis

For digital gene expression analysis, the 94 pre-selected DH lines, the two parents Express 617 and V8, and their F1 (Express 617 x V8) were germinated in Jacobsen vessels under controlled conditions in a climate chamber at 20°C for 16 h (day) and 15°C for 8 h (night) with 55% relative humidity. Two experimental replications were performed. At two time points (eight and twelve days after sowing), 100 seedlings from each line were harvested for ribonucleic acid (RNA) extraction within one hour to prevent circadian-clock effects during transcriptome analysis. All samples were immediately shock-frozen in liquid nitrogen and stored at -80°C until RNA extraction. Extraction of messenger RNA (mRNA) and digital gene expression sequencing (DGE-seq) were conducted on all samples as described by Obermeier et al. [12]. WGCNA was performed to identify gene networks correlated with developmental and yield-related traits. Within trait-correlated network modules, hub genes showing the highest interconnectivity to other genes in the module were selected as potential regulatory candidates for reverse transcription quantitative polymerase chain reaction (RT-qPCR) in the diversity set.

RNA extraction, cDNA synthesis, and RT-qPCR

A total of 100 ng of the leaf apex of the second leaf of each of the 509 genotypes in each of the six replicates was collected after 30 days of growth in the greenhouse trial, as explained in detail by Körber et al. [10]. After harvest, the samples were directly frozen in liquid nitrogen and ground to a fine powder. Total RNA was isolated from the powder using Trizol reagent following the manufacturer's protocol (Invitrogen, Karlsruhe, Germany). The total RNA was treated with RNase-free DNase I (Fermentas; final volume 100 μl) to remove genomic deoxyribonucleic acid (DNA) contamination. The RNA concentration was determined using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). All samples were diluted to an RNA concentration of 100 ng/μl, and the samples from the six replicates of each inbred were pooled in equal amounts in order to reduce error variance. First-strand complementary DNA (cDNA) was synthesized from 15 μl of total RNA using the Maxima First Strand cDNA Synthesis Kit for RT-qPCR (Invitrogen, Karlsruhe, Germany) following the manufacturer's recommendations. The resulting cDNA was diluted to 25 ng/μl. Gene-specific primers (10 pmol/μl) for the 15 candidate genes as well as the control gene Actin (Table 2) were used for the RT-qPCRs performed on the cDNA samples. Amplifications were performed using 5 μl of cDNA, 7 μl of DyNAmo ColorFlash SYBR Green (Biozym), and 1.5 μl of each primer. To minimize pipetting inaccuracy, the cDNA was pipetted using the Biomek FX pipetting robot (Biomek). The following amplification conditions were used for the RT-qPCR on a LightCycler480 (Roche): preincubation at 95°C for 3 min and amplification with 45 cycles (55 for APL) of 95°C (10 sec) and 60°C (1 min). At the end of each run, a dissociation analysis was performed to confirm the specificity of the reaction. Each 384-well plate used for the RT-qPCR reactions included no-template controls and cDNA of the two trial standards. The RT-qPCR products of each of the 15 genes (eight from WGCNA (see below) and seven from the literature) for five inbreds of the diversity set were Sanger sequenced at the Max Planck Genome Center Cologne to confirm the specificity of the amplifications.
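A minimal sketch of the hub-gene selection step mentioned above: within a co-expression module, the "top hub" is the gene with the highest intramodular connectivity (sum of adjacency weights to the other module members), mirroring the idea behind WGCNA's chooseTopHubInEachModule. The soft-threshold power of 5 matches the paper's setting, but the expression matrix and gene labels are random placeholders, and the real analysis runs in R on the full consensus network.

```python
# Unsigned WGCNA-style adjacency and top-hub selection for one module.
import numpy as np

rng = np.random.default_rng(2)
expr = rng.normal(size=(60, 8))          # 60 samples x 8 genes in one module
genes = [f"gene{i}" for i in range(8)]   # hypothetical gene labels

# Adjacency: |correlation| raised to the soft-threshold power (here 5).
corr = np.corrcoef(expr, rowvar=False)
adjacency = np.abs(corr) ** 5
np.fill_diagonal(adjacency, 0.0)

# Intramodular connectivity and the top hub gene of the module.
connectivity = adjacency.sum(axis=0)
top_hub = genes[int(np.argmax(connectivity))]
print(top_hub, connectivity.round(2))
```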
Genotyping of SNP markers

For the GWAS, the 509 B. napus inbred lines were assayed at Agriculture and Agri-Food Canada using a customized Brassica napus 6K Illumina Infinium SNP array (http://aafc-aac.usask.ca/ASSYST/). This array was designed from next-generation sequencing (NGS) data: Illumina short-read (100 bp paired-end) genomic sequence data from seven B. napus cultivars and three B. rapa cultivars, 3'-captured cDNA Roche 454 sequence data from seven B. napus cultivars and four B. oleracea cultivars, as well as Illumina short-read (80 bp single-end) RNA-Seq data from 42 B. napus cultivars [13]. It contained 5,506 successful bead types representing the same number of potential SNPs. Samples were prepared and assayed as per the Infinium HD Assay Ultra Protocol (Infinium HD Ultra User Guide 11328087_RevB, Illumina, Inc., San Diego, CA). The Brassica 6K BeadChips were imaged using an Illumina HiScan system, and the SNP alleles were called using the Genotyping Module v1.9.4 within the GenomeStudio software suite v2011.1 (Illumina, Inc., San Diego, CA). SNP data were available for 505 inbreds of the diversity set, and only SNPs with a percentage of missing data < 30% across all genotypes and a minor allele frequency > 0.05, as well as genotypes with a percentage of missing data < 20% across all SNPs, were used for the following statistical analyses. Of these 3,910 SNPs, 3,828 could be assigned a physical map position derived from the reference information of B. rapa [14] and B. oleracea [15].

Statistical analyses

Weighted gene co-expression network analysis

WGCNA was performed using the WGCNA R package as described by Langfelder and Horvath [16]. Normalized tag counts (per ten million reads) were obtained for 154,790 probes (86,908 probes mapping to B. rapa and 67,882 probes to B. oleracea reference unigene sequences) using Illumina sequencing of 3'EST digital gene expression tags. Probes were kept if they had a normalized tag count of at least five in six or more samples. Replicate probes for each unigene were averaged, and the 91,048 unigenes present in both datasets were used for the WGCNA consensus analysis. A total of 108 modules were obtained using the automatic network construction function "blockwiseConsensusModules" with the following settings: power = 5, minModuleSize = 50, deepSplit = 2, maxBlockSize = 35000, reassignThreshold = 0, mergeCutHeight = 0.25, minKMEtoJoin = 1, minKMEtoStay = 0. Using the WGCNA function "chooseTopHubInEachModule", the top hub unigenes were identified from 15 modules which were highly conserved between the two datasets, and eight of these top hub unigenes could be amplified as functional candidate genes by RT-qPCR in the 509 rapeseed inbred lines. The network of unigenes with an edge weight ≥ 0.1 was visualized in Cytoscape [17], and the putative function of the modules was determined using Gene Ontology Singular Enrichment Analysis (p < 0.001) [18].

Normalization and differences of gene expression data

The Cp value at which the fluorescence rose above the background fluorescence was calculated for each inbred-gene combination using the LightCycler 480 Software (Roche; version 1.5). The Cp value, designated in the following as the gene expression level, was normalized to the percentage of the expression level of the housekeeping gene Actin for the corresponding inbred. Associations among inbreds and genes were revealed by a heatmap analysis and grouped with the complete-linkage clustering method.
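The normalization and clustering just described can be sketched compactly. In the snippet below, each gene's Cp value is expressed as a percentage of the Actin Cp of the same inbred, and the resulting inbred x gene matrix is clustered with complete linkage, as for the heatmap analysis; the Cp values themselves are random placeholders.

```python
# Actin-relative Cp normalization and complete-linkage clustering of inbreds.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
genes = ["FBP", "RBC", "PECT", "AILP1", "Actin"]
cp = pd.DataFrame(rng.uniform(15, 35, size=(10, 5)), columns=genes)

# Normalize: Cp of each gene as a percentage of the Actin Cp per inbred.
rel = cp[["FBP", "RBC", "PECT", "AILP1"]].div(cp["Actin"], axis=0) * 100

# Complete-linkage clustering of inbreds on the normalized expression levels.
tree = linkage(rel.values, method="complete")
print(fcluster(tree, t=5, criterion="maxclust"))
```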
Genome positions of the candidate genes

A basic local alignment search tool (BLAST) search [19] was performed between the reference sequences of the candidate genes and the reference sequences of B. rapa (v1.2) [14] and B. oleracea (v1) [15]. All positions with a BLAST identity ≥ 85% were used.

Calculation of adjusted entry means

The adjusted entry means M of each genotype-trait/-gene combination, which were the basis for all further analyses, were calculated for the seedling development traits and the gene expression data using different mixed models. For the former, they were calculated as described in detail by Körber et al. [10]. The calculations for the gene expression data were based on the following model:

y_ij = μ + g_i + t_j + e_ij,

where y_ij is the observation of the ith genotype in the jth technical replication, μ an intercept term, g_i the genotypic effect of the ith genotype, t_j the effect of the jth technical replicate, and e_ij the residual. For calculating the adjusted entry means, g_i was regarded as fixed and all other effects as random.

Principal component analysis and the assessment of linkage disequilibrium

The 509 rapeseed inbreds of our study were assigned to three clusters (MCLUST) using a principal component analysis (PCA) of 89 SSR markers, as described by Bus et al. [2]. In order to determine the physical map distance over which LD decays in our B. napus diversity set, r² (the square of the correlation of the allele frequencies between all pairs of linked SNP loci) was calculated, where linked loci were defined as loci located on the same chromosome, and plotted against the physical distance in megabase pairs. The overall decay of LD was evaluated by nonlinear regression of r² according to Hill and Weir [20]. The percentage of linked loci in significant LD was determined with the significance threshold set to the 95% quantile of the r² values among unlinked locus pairs, where unlinked loci were defined as loci located on different chromosomes. Pairwise modified Roger's distance (MRD) estimates between all inbreds and between the MCLUST groups 1-3 were calculated according to Wright [21].

Genome-wide association analyses

The genome-wide association analyses of the seedling development traits and the gene expression data were performed as single-marker analyses using the PK method [22]:

M_lm = a_m + Σ_u v_u P_lu + g*_l + e_lm,

where M_lm is the adjusted entry mean of the lth inbred carrying allele m, a_m the effect of the mth allele, v_u the effect of the uth column of the population structure matrix P, g*_l the residual genetic effect of the lth entry, and e_lm the residual. The first and second principal components calculated from the 89 SSR markers [2] were used as the P matrix. The variance of the random effect g* = (g*_1, ..., g*_509) was assumed to be Var(g*) = 2K σ²_g*, where σ²_g* is the residual genetic variance. The kinship coefficients K_ij between inbreds i and j were calculated based on the above-mentioned SSR markers according to

K_ij = (S_ij − T) / (1 − T),

where S_ij is the proportion of marker loci with shared variants between inbreds i and j, and T the average probability that a variant from one parent of inbred i and a variant from one parent of inbred j are alike in state, given that they are not identical by descent [23]. The optimum T value was calculated according to Stich et al. [22] for each trait. The association analysis outlined above was performed with the R package EMMA [24]. We chose a significance threshold of P-value = 0.0001 as well as the Bonferroni-corrected threshold (P-value = 0.05).
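A minimal sketch of the kinship computation in the PK model above, assuming the conditional-probability form K_ij = (S_ij − T)/(1 − T) with S_ij the proportion of shared marker variants. The marker matrix and the value of T are placeholders; in the study, T is optimized per trait following Stich et al. [22], and the resulting K enters the mixed model through Var(g*) = 2Kσ²_g*.

```python
# Marker-based kinship matrix from shared-variant proportions.
import numpy as np

rng = np.random.default_rng(4)
markers = rng.integers(0, 2, size=(20, 89))   # 20 inbreds x 89 SSR-like loci

# S_ij: proportion of loci with shared variants between inbreds i and j.
S = np.array([[np.mean(markers[i] == markers[j]) for j in range(20)]
              for i in range(20)])

T = 0.25                                      # hypothetical optimized value
K = np.clip((S - T) / (1 - T), 0.0, None)     # negatives truncated to 0,
print(K.round(2)[:3, :3])                     # a common convention
```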
The association analysis was performed for all inbreds and for each of the three MCLUST groups. For the separate association analyses of the three MCLUST groups, only the kinship matrix K but no P matrix was considered. SNPs which are associated with multiple traits are defined as hot spots for these traits. If not stated differently, all analyses were performed with the statistical software R [25].

Linkage disequilibrium and allele frequency

The nonlinear regression trend line of the LD measure r² vs. the physical distance intersected the Q95 of r² among unlinked loci pairs (0.145) at 676,992 bp (Figure 1). The allele frequencies of the 3,828 SNPs across all 509 inbreds ranged from 0.05 to 0.95.

Gene expression data

The expression levels of the 15 candidate genes examined in the 509 B. napus inbreds showed an average standard deviation (SD) of 5.6 across all inbreds, ranging from 3.2 to 8.8. The average MRD (± standard error) of each of the MCLUST groups 1 to 3 vs. the other two MCLUST groups was 0.32 (±0.01), 0.34 (±0.01), and 0.28 (±0.01), respectively. The consensus WGCNA for the two datasets allocated 83,262 unigenes into 108 modules, while 7,776 unigenes remained unassigned. Each module comprised between 53 and 10,285 unigenes. The candidate genes were selected as the top hub genes from 15 modules which were highly conserved between the two datasets, and for eight of them amplification via RT-qPCR was successful (Figure 2). Seven further candidate genes were selected from main metabolic pathways. Across the examined 15 candidate genes, the gene APL showed on average the lowest expression relative to Actin, whereas the gene RBC showed the highest (Figure 3). The genes APL, UBP15, PECT, GRF1, and SPS were assigned to a cluster of genes with lower expression compared to Actin, whereas all the other genes clustered into a group of highly expressed genes. Furthermore, based on the expression levels of the 15 genes, the 509 inbreds were clustered into five different subgroups comprising different germplasm types. The expression levels of the analysed genes differed between the eight germplasm subsets and the three MCLUST groups. Across all 509 inbreds, the expression levels of the genes FBP, SPS, RBC, PK, UBP15, PECT, APL, AILP1, GER1, NOI, GRF1, and GF14 were significantly higher (P-value = 0.05) in the mainly modern winter OSR and spring OSR germplasm types compared to the remaining subsets. In contrast, the expression levels of the genes CEL16 and MyAP showed the opposite trend. The expression levels of FBP were mostly negatively correlated with the seedling development traits, with correlation coefficients down to -0.39. In contrast, the candidate genes AILP1 and PECT were mostly positively correlated with the seedling development traits, with correlation coefficients up to 0.26 (Additional file 8: Figure S35).

Genome-wide association mapping

In the GWAS with 3,910 SNPs for all 509 B. napus inbreds, we observed a total of 63 SNP-trait associations with a P-value < 0.0001 for 14 of the 20 seedling development traits. A total of 20.6% of these SNP-trait associations were detected on the A genome, and more than half of them were located on the chromosomes A10 and A3. In contrast, 76.2% of the associations were detected on the C genome, and most of them were located on the chromosomes C7 and C2. In addition, two SNP-trait associations could not be mapped to the genome of B. napus (Table 3). The 63 associations individually explained from 3.0 to 4.9% of the phenotypic variance.
Furthermore, between one and 21 SNP-trait associations were associated with the same trait, and these explained in simultaneous fits between 3.3 and 20.3% of the phenotypic variance (Table 3). For the association analysis of the gene expression levels, we observed across all 509 B. napus inbreds 31 SNP-gene associations for 13 of the 15 examined genes with a P-value < 0.0001. A total of 35.5% of these SNP-gene associations were located on the A genome, where no clustering across the chromosomes was observed. In contrast, 64.5% were identified on the C genome, and 40% of them were located on the chromosomes C2 and C8 (Figure 5 and Table 3). We identified between one and six SNPs to be associated with the gene expression variation of the individual genes. The identified SNPs individually explained from 3.0 to 13.5% of the phenotypic variance. Furthermore, between two and seven SNP-gene associations were associated with the same gene, and these explained in simultaneous fits between 3.6 and 13.7% of the phenotypic variance (Table 3). Across all 509 inbreds, the SNP-FBP association of the gene expression levels was identical with the SNP-ASR association of the seedling development traits on chromosome C7. The SNP-gene associations of MyAP, PK, and SPS and the SNP-trait association of SPD on chromosome C2, as well as the SNP-gene association of PK and the SNP-trait association of H2O on chromosome A3, were also identical for all 509 inbreds. Furthermore, for the MCLUST group 2 the SNP-gene association of PECT on chromosome C6 corresponded with the associations of the projected leaf area hot spot of the seedling development traits.

Correspondence of associations across subgroups

In the P-value profile from the genome-wide association mapping, several SNP-RBC associations with a P-value < 0.0001 were detected on chromosome A4 for all 509 inbreds and the inbreds of the MCLUST groups 1-2 (Figure 4d-f), as well as on chromosome C4 for all 509 inbreds and the inbreds of the MCLUST group 2 (Figure 4d and f). Furthermore, mentionable SNP-RBC associations with a P-value < 0.0001 were observed on chromosome C6 for the inbreds of the MCLUST group 1 and on chromosome C2 for the inbreds of the MCLUST group 3 (Figure 4e and g). The SNP-RBC associations detected on chromosomes A4 and C4 for all 509 B. napus inbreds and the inbreds of the MCLUST group 2 were in accordance with their physical map position. Furthermore, for the inbreds of the MCLUST group 1 the SNP-RBC association on chromosome A4 was also in accordance with its physical map position, but not on chromosome C4, where the distance in between was ~1.3 Mb. In addition, the SNP-FBP associations were in accordance with their mapped genome positions on chromosomes A9 and C8 for all inbreds (Figures 5, 6 and 7).

Linkage disequilibrium and SNP density

The nonlinear trend line of the LD measure r² decayed below the significance threshold, the 95% quantile of the r² values among unlinked loci pairs, within a distance of 677 kb (Figure 1). Bus et al. [2] estimated, based on 89 SSR markers, that the pairwise LD decayed within a genetic map distance of approximately 1 cM. This corresponds to about 500 kb [26] and is in good accordance with the value observed in our study. The LD observed by Ecke et al. [27] decayed more slowly, within 2 cM. The reason for this observation could be that the population studied by Ecke et al. [27] was less diverse than the B. napus diversity set examined in the current study.
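The nonlinear regression of r² against physical distance mentioned above can be sketched as follows, using the Hill and Weir [20] drift expectation of r² with C = ρ·d. The simulated data frame, the sample size default and the starting value are illustrative assumptions.

# Hill & Weir expectation of r^2 under drift, with C = rho * d.
hill_weir <- function(d, rho, n = 509) {
  C <- rho * d
  ((10 + C) / ((2 + C) * (11 + C))) *
    (1 + ((3 + C) * (12 + 12 * C + C^2)) / (n * (2 + C) * (11 + C)))
}

# Illustrative pairwise r2 values for linked loci vs. distance in bp.
set.seed(3)
d  <- runif(5000, 1, 5e6)
ld <- data.frame(dist = d,
                 r2   = hill_weir(d, rho = 5e-6) + rnorm(5000, sd = 0.05))

fit <- nls(r2 ~ hill_weir(dist, rho), data = ld, start = list(rho = 1e-6))

# Distance at which the fitted trend line drops below the Q95 of r2
# among unlinked loci pairs (0.145 in our data set).
trend <- function(d) hill_weir(d, coef(fit)[["rho"]])
uniroot(function(d) trend(d) - 0.145, interval = c(1, 5e6))$root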
In our study, 1,755 SNPs mapped to the A genome, whereas 2,073 SNPs mapped to the C genome. Furthermore, 99.4% of the adjacent SNPs on the A genome and 93.0% of the adjacent SNPs on the C genome had a distance smaller than the average range of LD (677 kb). Therefore, this GWAS is expected to reach, on average, 14.7% of the possible power (Figure 1). Compared to previous studies in B. napus, the SNP marker density of our study is expected to provide a higher power to detect SNP-trait/-gene associations in the B. napus diversity set.

Genome-wide association mapping of seedling development traits

Seedling development traits are important targets for breeding because an optimal seedling development leads to a higher yield stability even under suboptimal growing conditions [1]. Up to now, however, little is known about the genetic mechanisms as well as the natural variation of seedling development in B. napus. Thus, we used an association mapping approach to elucidate the genetics of seedling development in B. napus. We observed a total of 63 associations between SNPs and 14 of the 20 seedling development traits with a P-value < 0.0001 (Figure 5 and Additional file 7: Figures S15d-g to S34d-g). Furthermore, for the 14 seedling development traits we found between one and 21 SNP-trait associations for a single trait, which explained in a simultaneous fit, on average, 8.5% of the phenotypic variance, with a range from 3.3 to 20.3% (Table 3). The large number of associations for these 14 seedling development traits suggests that their inheritance is genetically complex. In contrast to the seedling development traits examined in our study, Honsdorf et al. [7] carried out an association analysis of phenological, morphological, and quality traits in 84 canola-quality winter rapeseed (Brassica napus) lines and identified 86 putative QTLs for ten of 14 traits, which explained, on average, 36.2% of the phenotypic variance. These differences in the explained phenotypic variance could be due to the fact that Honsdorf et al. [7] analysed agronomic and seed quality traits instead of seedling development traits and that a lower number of genotypes was examined compared to our study. The latter leads to an overestimation of marker effects. This overestimation, however, decreases with a higher number of genotypes in a GWAS [28]. Thus, in a GWAS with 509 inbreds this overestimation is expected to be of minor importance. For the seedling development traits projected leaf area LA08, LA10, LA12, LA14, and LA16, we identified five different hot spots (defined as SNPs associated with multiple traits) on the chromosomes A2, A7, C3, C6, and C7 for all inbreds and/or the MCLUST groups 1 to 3 (Figures 5, 6, 7 and 8). They explained in a simultaneous fit between 4.3 and 39.9% of the phenotypic variance (Table 3 and Additional file 1: Table S1, Additional file 2: Table S2, Additional file 3: Table S3). Basunanda et al. [11] found in sets of B. napus backcrossed test hybrids a QTL for the leaf area of 28-day-old seedlings in the middle of chromosome A5 at 53.5 cM, which explained 3.0% of the phenotypic variance. Furthermore, Edwards and Weinig [29] measured the leaf area of one young, fully expanded leaf at bolting of 150 B.
rapa recombinant inbred lines (RILs) across simulated seasonal settings and detected, at cool temperature and short photoperiod conditions, a QTL in the middle of chromosome A6 at 58.63 cM which explained 7.8% of the phenotypic variance.

[Figure legend (Figures 5-8): SNPs with their minor allele frequencies are given in the outer circle. The SNPs associated with the candidate gene expression are plotted in orange below the allele frequency circle, and the seedling development SNP-trait associations in blue outside the allele frequency circle. The size of the letters is related to the proportion of the variance explained by the associations. In the inner circle of the 19 chromosomes, the candidate genes were plotted at their mapping positions on the B. rapa and B. oleracea reference genomes. Potential cis-regulated candidate genes are colored red. The A genome is colored blue and the C genome green.]

These discrepancies in the different studies can be explained by dissimilarities in the power to detect QTLs as well as by genotype × environment and QTL × environment interactions when examining different genetic material. All three factors have the potential to lead to different QTLs in different studies [30]. The leaf area association hot spots for the MCLUST groups 2 and 3 on chromosome C6 were separated by ~750 kb. As the average range of LD in the examined diversity set was, at 677 kb, close to the separation of ~750 kb on the physical map, no differentiation between linkage and pleiotropy was possible in our study. We identified association hot spots for leaf area on the bottom of chromosome A7 (MCLUST 2) and on the top of chromosome C6 (MCLUST 2 and 3) (Figures 6, 7, 8 and Additional file 7: Figures S15d-g to S19d-g). Furthermore, BLAST searches revealed that the two candidate genes GER1 and GF14 are located up- and downstream of these two hot spots, respectively. Thus, these two genome regions with their flanking candidate genes might be homologous regions. In contrast, the candidate gene sequence of MyAP was mapped by a BLAST search to the top of chromosome A6 and the bottom of chromosome C6 (Figures 5, 6, 7 and 8). Lydiate et al. [31] and Parkin et al. [32], using 399 restriction fragment length polymorphism (RFLP) markers, identified homologous genome regions between the top of chromosome A6 and the top of chromosome C6 in reverse direction, as well as between the bottom of chromosome A7 and the bottom of chromosome C6. However, our result implies that these homologous genome regions might be interchanged, such that the genome region on the top of chromosome A6 is homologous to the genome region on the bottom of chromosome C6, and the genome region on the bottom of chromosome A7 is homologous to the genome region on the top of chromosome C6. This finding is in good accordance with the results of Parkin et al. [33], who analysed genome duplications within the B.
napus genome with 455 RFLP markers and reported translocated regions which are inverted between A6 and C7 and between A6 and C6. The MRDs between each of the MCLUST groups 1 to 3 and the other two MCLUST groups were, on average, 0.32, 0.34, and 0.28, respectively. Furthermore, the phenotypic variation of the examined seedling development traits which was explained by population structure was, on average, 30.9% (Table 1). For some traits, the correspondence of the detected associations between the three subgroups was low. The reason for this observation can be a different genetic architecture in the three subgroups. Alternatively, different allele frequencies at the corresponding SNPs in the three groups, and the resulting differences in the power to detect the associations, can be the reason. It is impossible to decide between these two explanations based on association mapping results alone. This would require further analyses examining a set of bi-parental populations. Nevertheless, in this study we present, in addition to the results of the subgroups, also the results across all 509 inbreds to benefit from the higher power to detect SNP-trait/-gene associations.

Variation of gene expression in seedling leaves

In the framework of our study, it was not possible to perform a genome-wide gene expression study with the available budget. Therefore, we selected seven candidate genes from main metabolic pathways to examine their correlation with seedling development traits. Furthermore, the top hub genes from a WGCNA were studied because of their potential role as high-level regulators (Figure 2). We observed a high expression of RBC in the seedling leaves of all 509 B. napus inbreds (Figure 3). This can be explained by the fact that RuBisCO (RBC) is the most abundant protein in plants [34]. The low expression levels of APL, the predominant large-subunit isoform in leaves [35] (Figure 3), are due to the fact that APL (E.C. 2.7.7.27) is involved in starch synthesis [36], and starch in exporting leaves represents only a transient store [37]. The expression levels of the 15 candidate genes examined in the 509 B. napus inbreds showed an average standard deviation (SD) of 5.6 across all inbreds, ranging from 3.2 to 8.8. For 16 A. thaliana samples, the SDs of more than 24,000 genes mostly varied between 0.5 and 5 [38]. The considerably higher SD in our study compared to that of Hruz et al. [38] might be due to the fact that our candidate genes were selected based on expected differences in expression levels in B. napus seedlings. This observation suggests that the measured gene expression differences between the 509 B. napus inbreds were more than adequate for the correlation with the phenotypic variation of seedling development. The candidate genes GER1, AILP1, PECT, and FBP had the highest correlations with the seedling development traits. GER1 might play a role in plant defense, AILP1 is involved in the response to auxin and aluminum ion stimuli, PECT is part of phospholipid synthesis, and FBP is an enzyme that converts fructose-1,6-bisphosphate to fructose 6-phosphate in gluconeogenesis, the Calvin cycle, and many other metabolic pathways.
Thus, these genes have an essential effect on the development of rapeseed seedlings and could have great potential for breeding rapeseed varieties with improved seedling development. Therefore, not only markers associated with seedling development traits, but also the expression of genes correlated with seedling development, could be used for marker-assisted selection in B. napus to improve seedling development.

Genome-wide association mapping of gene expression correlated with seedling development

We mapped the gene expression levels of the 15 candidate genes in our diversity set to identify genome regions contributing to their regulation. These regions could comprise genes, or specific regulators of genes, influencing seedling development and might be useful for marker-assisted selection in B. napus to improve seedling development. We found across all 509 B. napus inbreds 31 SNP-gene associations for 13 of the 15 candidate genes with a P-value < 0.0001. The SNPs associated with the expression of the candidate genes explained in a simultaneous fit, on average, 6.9% of the phenotypic variance for a single gene (Table 3). This is in accordance with the findings for the seedling development traits, which explained in a simultaneous fit, on average, 8.5% of the phenotypic variance for a single trait (Table 3). From this it follows that the expression levels of the candidate genes and the seedling development traits have a similar genetic complexity. The SNP-gene association of PECT on chromosome C6 for the MCLUST group 2 is identical with the association of the projected leaf area hot spot of the seedling development traits. Mizoi et al. [39] observed that pect1-4/pect1-6 F1 Arabidopsis mutants displayed severe dwarfism. PECT catalyses the rate-limiting step in the Kennedy pathway (phospholipid synthesis) [40] and plays a major role in the structure and function of membranes [41]. Thus, the SNP-PECT association on chromosome C6 may have caused the differences in leaf area growth of the examined B. napus seedlings. The genome positions of the SNP-gene associations of RBC on the chromosomes A4 and C4 and of FBP on the chromosomes A9 and C8 were in accordance with the genome positions of these genes (Figures 5, 6, 7 and 8, red-colored SNP-gene associations). Following Chen et al. [42], these associations were defined as cis-regulated, because the SNP-gene associations were within 677 kb upstream or downstream of the gene position mapped by the BLAST search. In the neighborhood of the SNP-gene association of the candidate gene RBC on chromosome A4 at 6,466,112 bp, the B. napus genes Bra028181, Bra028174, and Bra028175 are located. Their best BLASTX hits in A. thaliana are the genes AT5G38430 and AT5G38420, which encode the RuBisCO small subunits 2B and 1B of A. thaliana, respectively. Furthermore, the SNP-gene association of FBP on chromosome A9 at 27,053,307 bp is near the B. napus gene Bra007041, whose best BLASTX hit in A. thaliana is the gene AT3G54050, encoding a fructose-1,6-bisphosphate phosphatase. All the other SNP-gene associations were outside this range and therefore defined as trans-regulated (Figures 5, 6, 7 and 8). The genome regions underlying these trans-regulatory SNP-gene associations thus most likely encode transcriptional regulators, which requires further research.

Conclusions

In this paper we conducted the largest genome-wide association study of seedling development traits in Brassica napus to date, using a diversity set comprising 509 inbreds.
A total of 99.4% of the adjacent SNPs on the A genome and 93.0% of the adjacent SNPs on the C genome had a distance smaller than the average range of LD. Therefore, this genome-wide association study is expected to reach, on average, 14.7% of the possible power. Compared to previous studies in B. napus, the SNP marker density of our study is expected to provide a higher power to detect SNP-trait/-gene associations in the B. napus diversity set. The large number of associations detected for the examined 14 seedling development traits indicates that their inheritance is genetically complex. Based on a weighted gene co-expression network analysis in a segregating population, regulatory genes were selected to analyse their gene expression in seedling leaves in the diversity set. The candidate genes GER1, AILP1, PECT, and FBP were strongly correlated with the seedling development traits. Thus, these genes might be interesting targets and have potential for breeding rapeseed varieties with improved seedling development. For the projected leaf area traits, we identified five different association hot spots on the chromosomes A2, A7, C3, C6, and C7. Further research is required to identify the causative polymorphisms in these association hot spots.

Availability of supporting data

The data sets supporting the results of this article are included within the article and its additional files (Additional file 4: Table S4, Additional file 5: Table S5).
A late Paleocene probable metatherian (?deltatheroidan) survivor of the Cretaceous mass extinction

Deltatheroidans are primitive metatherian mammals (relatives of marsupials), previously thought to have become extinct during the Cretaceous mass extinction. Here, we report a tiny new deltatheroidan mammal (Gurbanodelta kara gen. et sp. nov.) discovered at the South Gobi locality in China (Xinjiang Province) that is the first Cenozoic record of this clade and renders Deltatheroida a Lazarus taxon (with a new record 10 million years younger than their supposed extinction). The vertebrate fauna associated with Gurbanodelta is most similar to that from the slightly older late Paleocene Subeng locality in Inner Mongolia. The upper molars of Gurbanodelta exhibit a broad stylar shelf with one prominent cusp (stylocone), and a paracone that is sharp and significantly taller than the metacone. The lower molar tentatively assigned to Gurbanodelta has a very small talonid without an entoconid. This combination of features is known only in deltatheroidans. Phylogenetic analysis places Gurbanodelta as the sister taxon of the North American latest Cretaceous Nanocuris. Gurbanodelta is the smallest-known deltatheroidan, and roughly the same size as the smallest living marsupial. It is likely that the Gurbanodelta lineage dispersed between Asia and North America as part of known intercontinental mammalian dispersals in the late Paleocene, or possibly earlier.

talonid (as compared to Gurbanodelta) with well-differentiated hypoconids, hypoconulids and entoconids on the lower molars. Deltatheridium and Deltatheroides are closely related taxa, and their morphology is well known 12,13. In comparison to the smaller Gurbanodelta, both Deltatheridium and Deltatheroides have proportionally broader buccal stylar shelves that occupy more than half of the tooth's width. The stylar shelf in Gurbanodelta occupies about one-third of the tooth's width. The ectoflexus of Deltatheridium and Deltatheroides is very deep, and its depth increases from M1 to M3. The ectoflexus of Gurbanodelta is moderately deep, and it becomes shallower from M2 to M3. The paracone and metacone of Deltatheridium and Deltatheroides closely approach each other, and their bases are almost fused together. Both cusps are slender, and the paracone is only slightly taller than the metacone. The bases of the paracone and metacone are not completely fused together in Gurbanodelta, but they approach each other. The paracone and metacone are trenchant in shape, with the former much higher than the latter. The postmetacrista of M2 in Deltatheridium and Deltatheroides is very long and strong. A small notch is present on this crista, making it resemble a carnassial shearing blade. The postmetacrista of M2 in Gurbanodelta is also very strong, but lacks such a notch. Relative to the preparacrista, the postmetacrista of M3 is reduced in Deltatheroides, and very reduced in Deltatheridium. In Gurbanodelta, the postmetacrista of M3 is as long as the preparacrista. The m1 (known in Deltatheridium but not in Deltatheroides) has a paraconid higher and larger than the metaconid, and its hypoconid is quite projecting. The m1 of Gurbanodelta has a paraconid lower and smaller than the metaconid; its hypoconid is low and small, and barely projects above the talonid. The recently described Lotheridium is very similar to Deltatheridium and Deltatheroides.
The differences between Gurbanodelta and Deltatheridium (plus Deltatheroides) also hold between Gurbanodelta and Lotheridium. Sulestes has very broad stylar shelves on the upper molars, very strong postmetacristae with carnassial notch-like structures on M2, and a reduced metacone and postmetacrista lobe on M3 (similar to Deltatheridium and Deltatheroides but different from Gurbanodelta) 9. The protocone of Sulestes is large and bears a strong paraconule and metaconule. For a deltatheroidan, this morphology is quite unique. The ectoflexus of Sulestes is relatively shallow and broad, and the width of the stylar shelves decreases from M1 to M3. These two features are similar to Gurbanodelta. The m1 of Sulestes has a very large paraconid, larger and taller than the metaconid. A deep carnassial notch is present on the paracristid. Corresponding to the broad protocone, the talonid in Sulestes is also relatively large. The hypoconid and hypoconulid are prominent, and a rudimentary entoconid is always present. These lower molar characters are in sharp contrast to those of Gurbanodelta. Oklatheridium is a small deltatheroidan, but still larger than Gurbanodelta. The protocone in this species is relatively better developed than in Gurbanodelta. Its trigon basin is broader, and the conules are larger than those of Gurbanodelta. Its preprotocrista is relatively low: it extends to the buccal side past the mesial side of the paracone, but it is not elevated and closely approaches the base of the paracone. In Gurbanodelta, the preprotocrista is elevated and well separated from the paracone. The relatively weak preprotocrista in Oklatheridium is likely coupled with a greater emphasis on postvallum/prevallid shearing than in Gurbanodelta. The postmetacrista of M2 in Oklatheridium is long and bears deep carnassial notches. The ectoflexus of the M2 is much deeper than in Gurbanodelta. The M3 of Oklatheridium has a reduced metacone lobe with a very short postmetacrista, similar to that in Deltatheridium and Deltatheroides, but different from Gurbanodelta. The lower molar of Oklatheridium has a paraconid slightly larger and higher than the metaconid. Small cuspids e and f are present. The talonid is better developed than in other deltatheroidans, and both the hypoconulid and entoconid are present. These features of Oklatheridium are very different from the m1 of Gurbanodelta. Similar to Gurbanodelta, the slightly larger Atokatheridium has a moderately broad stylar shelf, a mesiodistally compressed protocone, well-separated paracone and metacone, a salient buccal expansion of the preprotocrista, a shallow ectoflexus and a well-developed postmetacrista on M3. In both taxa, the postmetacrista lacks a carnassial notch. This absence may be related to their small body size and a lessened reliance on postvallum/prevallid shearing. The parastyle and the stylocone in Atokatheridium are not twinned. The parastyle is more lingually positioned relative to the stylocone, and lower than the stylocone. The stylocone itself is quite blunt. The buccal part of the paracrista is low, and weakly connected to the stylocone. In Gurbanodelta, the parastyle and stylocone are twinned cusps. Both are buccally positioned, and are similar in height and size. The stylocone is conical. The paracrista extends buccally and connects to the stylocone with a high ridge. The protocone of Atokatheridium is buccolingually broad, and proportionally wider than that of Gurbanodelta.
The m1 of Atokatheridium has a paraconid slightly smaller than the metaconid, but the two cusps are similar in height. In Gurbanodelta, the paraconid is much lower than the metaconid. The larger upper molars of Nanocuris have a proportionally longer crown outline than in Gurbanodelta. In Nanocuris, the protocone is relatively smaller: in mesial or distal view, it is significantly lower than the paracone and metacone. In Gurbanodelta, the protocone is only slightly lower than the paracone and metacone. The paracone of Nanocuris has a rounded lingual border. In contrast, the lingual side of the paracone of Gurbanodelta forms a blunt ridge. The preparacrista of Nanocuris is relatively weak, and much shorter than the strongly distobuccally expanded postmetacrista. In Gurbanodelta, the two cristae are almost equally developed. The postmetacrista of Nanocuris does not have a postmetacrista cusp and does not form a carnassial notch. This feature is very similar to Gurbanodelta. The stylocone and parastyle in Nanocuris are fused together, forming the only dominant cusp along the stylar shelf border. The lower molar of Nanocuris has a large paraconid that is bigger and higher than the metaconid. A salient carnassial notch is developed between the paraconid and protoconid. The talonid of the lower molar in Nanocuris is relatively larger than that in Gurbanodelta. The talonid has a small hypoconid, a hypoconulid and a cingulid-like entoconid, and the small talonid basin is enclosed by these three cusps. A strong cristid obliqua is also present in Nanocuris, and it extends mesially up to the tip of the metaconid. That feature is not present in other deltatheroidans. The larger Tsagandelta is represented by a single jaw fragment preserving m2 and part of the crown of m3 6. Tsagandelta has a paraconid that is larger than its metaconid, a sharp carnassial notch on the paracristid and a large mesiobuccal cuspid f. Those features are absent in Gurbanodelta. In addition, the talonid of Tsagandelta is relatively broader than that in Gurbanodelta, and the hypoconid and cristid obliqua are better developed than those in Gurbanodelta.

Phylogenetic Analysis

We added Gurbanodelta kara to the dataset of Luo et al. (2011) 14 to examine the systematic position of Gurbanodelta relative to Deltatheroida within a broader sample of mammals, and to the recent dataset of Rougier et al. (2015) 6 to examine the phylogenetic relationships between Gurbanodelta and other deltatheroidans. The strict consensus of the most parsimonious trees is shown in S-Figure 1. The m1 (IVPP V 22804) assigned to Gurbanodelta kara has a relatively small paraconid and a relatively weakly developed paracristid. These features are not "typical" for a deltatheroidan. When this lower molar of Gurbanodelta kara is not scored into the data matrix, the phylogenetic relationship between Gurbanodelta and other deltatheroidans remains unchanged (S-Figures 2, 3).

S-Figure 1. The strict consensus tree derived from 215 equally parsimonious trees (2,170 steps) resulting from the analysis of the dataset from Luo et al. (2011) 14. The numbers before the slashes are the Bremer support values; numbers after the slashes are relative supports 15,16. Internodes without Bremer support values indicate polytomies. Deltatheroidan taxa are indicated in red.
The fault diagnosis method of photovoltaic module based on probabilistic neural network

This paper describes a fault diagnosis method for photovoltaic (PV) modules based on an equivalent circuit model and a probabilistic neural network (PNN). The output characteristics of the PV module under normal, dust deposition, abnormal aging and partial shading conditions are simulated using the equivalent circuit model. The simulated data are used as characteristic parameters for fault type diagnosis. The performance of the fault diagnosis model is evaluated, and the results indicate that the method can detect the fault types correctly.

Introduction

The reliability and stability of the photovoltaic (PV) module are key to PV system performance. A PV module may develop faults in actual operation; if these are not detected soon enough, they not only reduce the output power but may also cause serious damage and fire hazards. Thus, fault diagnosis is crucial and necessary for PV module maintenance, and accurate fault diagnosis technology is necessary for improving the performance and reliability of PV systems to achieve higher energy yields. For this purpose, several fault diagnosis methods have been studied. Wang et al. proposed an online fault diagnosis method whose diagnosis model is based on a BP neural network and a mathematical model of the PV module; the results indicate that the model is suited to diagnosing various faults, with high effectiveness and applicability. Due et al. analyzed the output characteristics of PV modules under partial shadow and abnormal conditions and proposed a fault diagnosis method for PV modules based on the decision tree algorithm. According to the changes in the fill factor (FF), the maximum power point voltage (Ump), the maximum power point current (Imp), the open circuit voltage (Uoc) and the short circuit current (Isc), the state of PV modules can be diagnosed by the decision tree; the experimental results show that the method is feasible and effective. Chen et al. presented a fault diagnosis method based on PV module power loss and the I-V output curve: the simulated and measured output power are compared to determine whether there is power loss in the module, and Uoc and FF are then used to identify the type of fault. These theoretical and experimental studies have provided several fault diagnosis methods, but because PV modules work in complex environments, their faults are complex and diverse, and it is still difficult to determine the fault type accurately. Thus, fault diagnosis methods need further improvement and development. In this paper, the output characteristics of a PV module under different fault types are simulated using the equivalent circuit model. The simulated results are used to summarize the principles of the PV module output characteristics. A fault diagnosis model based on a probabilistic neural network (PNN) is established, and the accuracy and reliability of the model are validated.

The equivalent circuit model

Figure 1 shows the equivalent circuit diagram of a solar cell. Its mathematical expression is the single-diode relation

I = Iph − Id − Ish = Iph − I0·[exp((U + I·Rs)/UT) − 1] − (U + I·Rs)/Rsh,

where Iph is the photocurrent, Id is the current through the diode D, I0 is the diode reverse saturation current, Ish is the current through the parallel resistance, and Isc is the short circuit current. Uoc and U are the open circuit voltage and the output voltage of the PV cell, respectively. Rsh is the parallel resistance and Rs is the series resistance. q is the unit charge (1.6×10⁻¹⁹ C), and UT is the thermal voltage of the diode (25.9 mV). Gref and G are the reference solar irradiance at STC (1000 W/m²) and the irradiance at the real working condition, respectively. Tref is the temperature of the PV cell at STC (25 °C), T is the temperature at the working condition, and α is the current temperature coefficient.
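As a numerical sketch of the cell-level model just given, the following R code solves the implicit single-diode equation for the current at each voltage and extracts Isc, Ump and Imp from the simulated curve. The equation form is the standard single-diode model reconstructed from the symbol definitions above, and all parameter values are illustrative assumptions rather than the parameters of the tested module.

# Solve the implicit single-diode equation
#   I = Iph - I0*(exp((U + I*Rs)/UT) - 1) - (U + I*Rs)/Rsh
# for I at a given cell voltage U (all parameter values are illustrative).
pv_current <- function(U, Iph = 5, I0 = 1e-9, Rs = 0.01,
                       Rsh = 100, UT = 0.0259) {
  f <- function(I) Iph - I0 * (exp((U + I * Rs) / UT) - 1) -
    (U + I * Rs) / Rsh - I
  uniroot(f, lower = -Iph, upper = Iph)$root
}

U <- seq(0, 0.58, by = 0.005)          # voltage sweep up to ~Uoc [V]
I <- sapply(U, pv_current)
P <- U * I
c(Isc = I[1], Ump = U[which.max(P)], Imp = I[which.max(P)])  # curve features
plot(U, I, type = "l", xlab = "U [V]", ylab = "I [A]")       # simulated I-V curve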
A PV module consists of many PV cells wired in parallel to increase the current and in series to increase the voltage. Following formula (6), the equivalent mathematical expression of the PV module is obtained by scaling the cell currents by Np and the cell voltages by Ns, where Ns and Np are the numbers of series- and parallel-connected solar cells, respectively. The output characteristic of the PV module at standard test conditions (STC) can be simulated using this model. Due to the effect of the measuring environment, it is difficult to obtain the output characteristic at standard test conditions directly; thus, it is necessary to convert data measured in the real test environment to standard test conditions.

The fault analysis of PV modules

The PV module is directly exposed to a complex outdoor environment, and over time the module inevitably develops various faults. The common fault types of PV modules are dust deposition, abnormal aging and partial shading.

Abnormal aging

The value of the series resistance increases significantly when the PV module undergoes abnormal aging, which results in a reduction of the maximum power point voltage and the maximum power point current. According to the simulation model of the PV module, and ignoring the influence of temperature and external factors on the series resistance, a relationship between Rs, Iph, Imp and Ump can be derived.

Dust deposition and partial shading

Dust deposited on the surface of the PV module reduces the received solar radiation intensity, and the output power decreases. The simulated results of the dust impact on the output characteristic of the PV module at standard test conditions are shown in Figure 4. As shown in the figure, there is a linear relationship between the short circuit current and the solar radiation intensity, while the influence of the solar radiation intensity on the open circuit voltage is not significant. The decline of the maximum power point current causes the maximum output power to decrease.

The Probabilistic Neural Network

The probabilistic neural network consists of an input layer, a hidden layer and an output layer; the structure of the network is shown in Figure 6. For fault diagnosis of PV modules, the input layer receives the characteristic parameters of the PV module output used for fault diagnosis. The number of input neurons equals the length of the input vector. Let x denote an input sample of dimension d, so x = (x1, x2, ..., xd). The hidden layer encodes the fault categories; for each category, the number of hidden neurons equals the number of training samples of that category. With ω denoting a category and c the total number of categories, ω = (ω1, ω2, ..., ωc). The input layer and the hidden layer are connected through a Gaussian function that measures the degree of matching between each hidden-layer neuron and the input. The summation layer sums the matching degrees within each class and averages them to obtain the class score of the input sample. The summation layer contains one neuron per category, and the class score can be written as

f_i(x) = (1/Ni) Σ_{j=1..Ni} exp(−‖x − x_ij‖² / (2σ²)),

where Ni is the number of samples belonging to category i, σ is the smoothing factor, and x_ij is the jth central vector of the ith category.
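A direct sketch of this classification rule: each class score is the averaged Gaussian kernel between the input vector and that class's training samples, and the output layer returns the class with the highest score. The feature set (Uoc, Isc, Ump, Imp), the training values and the smoothing factor are illustrative assumptions, not the paper's measured data.

# Minimal PNN classifier following the summation-layer formula above.
pnn_classify <- function(x, train, labels, sigma = 0.1) {
  scores <- tapply(seq_len(nrow(train)), labels, function(idx) {
    diffs <- sweep(train[idx, , drop = FALSE], 2, x)   # x_ij - x (sign irrelevant after squaring)
    mean(exp(-rowSums(diffs^2) / (2 * sigma^2)))       # averaged Gaussian kernel f_i(x)
  })
  names(which.max(scores))                             # output layer: arg max over classes
}

# Illustrative training table with features (Uoc, Isc, Ump, Imp).
train  <- rbind(c(21.5, 5.0, 17.0, 4.6),   # normal
                c(21.4, 4.2, 16.8, 3.9),   # dust deposition
                c(21.3, 5.0, 15.5, 4.5),   # abnormal aging
                c(21.0, 5.0, 12.0, 3.1))   # partial shading
labels <- c("normal", "dust", "aging", "shading")

pnn_classify(c(21.4, 4.3, 16.7, 4.0), train, labels)   # expected: "dust"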
The number of neurons in the summation layer is the same as the hidden layer, and the expression as follows: Where Ni is the number of sample, which category belong to it, σ is the smoothing factor; I is the j central vector of the I type. Results and discussion The mat lab software is used to build the fault diagnosis model,and the equivalent circuit models are established by using mat lab/Simulink to obtained I-V curve, Ouch, Sic, Imp and Ump of PV module under different fault types. The simulate results are used as fault diagnosis parameters for PNN model. The accuracy and practicability of fault diagnosis model are verified by experiment data. Three fault PV modules are selected as the research object. The simulated and measured I-V curves are shown in Figure 7-9, the specific parameters and diagnosis results are shown in Table 1, respectively. According to the figure 7, the simulated and measured curves are fitting well. The fault diagnosis result indicates that two pieces of solar cell of the PV modules are shaded, and the shading ratios are 30%. Figure 7. The measured and simulated curves of PV module under partial shading Figure 8 shows the matching result of simulated and measured curves for PV module, which surface is covered by dust. The simulated curve and measured curve exhibit same variation trend. The simulated result indicates that dust deposition causes the solar radiation, which receives by PV module decreases 5%. From the above results, it can be seen that the fault diagnosis results for the PV module samples are correct. Therefore, it can be proved that the fault diagnosis method, which bases on PNN network and PV module equivalent circuit model can diagnosis the fault types, and can accurately simulate the electrical characteristics of PV modules. Conclusion In this paper, a fault diagnosis method of PV module is proposed. The method bases on equivalent circuit model and probabilistic neural network, the equivalent circuit model of PV modules under different fault types are established, and the corresponding electrical parameters are selected as the fault characteristic parameters for Fault diagnosis analysis. The method has high accuracy and feasibility.
MODERN EQUIPMENT OF LABORATORY ROOMS ON THE SUBJECT OF TECHNOLOGY

Today almost every specialist and business entity understands that the future development of Uzbekistan and of the world economy depends mainly on investment, and the wider attraction of investment into the economy of the republic depends on the effective implementation of economic reforms in our country. It is not difficult to understand that investment has become an important basis for development. Investment, including foreign investment, plays an important role in the social, economic and political development of the country.

Introduction

It is known that no state can develop in isolation from the world, without studying world experience and adopting the achievements of the world's leading countries in science and technology. It is therefore crucial to attract foreign investment to boost the country's economy and to build and reconstruct new enterprises equipped with modern machinery and technology. This will, first of all, help solve the most important social problems, such as employment and the increase of wages and incomes. Therefore, one of the most important issues is the economic stimulation of enterprises attracting foreign investment to our country and the creation of the necessary conditions for them. The main categories of the investment process are its essence, the impact of the world market on the investment process, the principles of attracting foreign investment, its main stages, the promotion of foreign investment, information on investment activities, the mechanism and main directions of attracting foreign investment, the insurance of foreign investments, leasing and franchising, and the determination of the economic efficiency of foreign investments.

II. Literature review

Our country is on the path of transition to a market economy, and the importance of investment policy on this path is very high, because investments stimulate structural changes in the economy, technical and technological innovation, the reconstruction of enterprises, and the increase of the country's export potential. In this regard, the state of Uzbekistan is pursuing its structural investment policy. Structural investment policy consists of regional, sectoral and enterprise investment policies, which are interrelated. Enterprise investment policy is a set of measures that allows effective operation while taking into account the interests of the enterprise, the population, the region and the investor. The investment policy of the enterprise, in turn, provides for the development of the enterprise, the export of products, the organization of import-substituting production, and the acquisition of new, modern equipment and technologies. Important strategies will be developed in the conduct of enterprise investment policy, and attracting foreign investment is important in their implementation. Political stability in our country and a very favorable investment climate are the basis for the development of long-term investment projects with foreign investors. In addition, conditions are being created for the provision of guarantees for foreign investment and loans, tax and customs benefits, and subsidies for loans and interest rates, and dozens of legal acts are in force. It is gratifying that many products of the chemical industry and technology of our republic, which occupy their own niche in the world market, are among the fruits of independence. The products of a number of enterprises of the Uzkimyosanoat state joint-stock company are being successfully exported.
These enterprises operate in Navoi, Almalyk, Samarkand, Fergana, Kokand, Chirchik, Kungrad and Yangiyul, and their chemical and chemical-processing products bring significant benefits to the state economy and its banking and financial system.

III. Analysis

During 2009, 690 investment projects were implemented under the investment program and the technical modernization programs, 303 of which were successfully completed. A total of 22 large production facilities were commissioned in the country, including 8 in the oil and gas, chemical and metallurgical industries, 9 in the engineering industry and 5 in the construction industry. From the first days of independence, great attention has been paid to the education of the younger generation and the creation of the necessary conditions for their future, with special attention paid to the education system. On the basis of international cooperation, in most secondary schools, academic lyceums and professional colleges, laboratory equipment and facilities imported from foreign countries (Korea, Japan, etc.) are used very effectively in educational and scientific activities. In the laboratories of the chemical and physical sciences, modern devices and equipment are introduced into the educational process. There are branches of prestigious educational institutions of developed countries in Uzbekistan. Today, 77 higher education institutions and research institutes are working together to make a significant contribution to the development of science. It is no exaggeration to say that every product, item and commodity used in our industry and daily life is a product of the achievements of chemical science. As a result of further improving the use of investments, the number of chemical production facilities, of new, modern equipment and technologies, and of modern laboratories meeting world standards will increase. In order to implement the Decree of the President of the Republic of Uzbekistan dated January 17, 2019 "On the State Program for the implementation of the Action Strategy for the five priority areas of development of the Republic of Uzbekistan in 2017-2021" in the "Year of Active Investment and Social Development", and the resolution of November 26, 2019 "On measures for the organization of educational institutions", standards for the equipment and facilities necessary for general secondary education have been developed. In accordance with this principle, in order to widely introduce information and communication technologies in educational institutions, it is planned to provide each classroom with computer equipment and electronic interactive whiteboards for teachers. In particular, it is planned to equip the teachers' room with modern technical equipment. Classrooms for music, drawing, fine arts and painting, mathematics and technology were supplied with additional teaching aids, equipment and musical instruments. Equipping the physics, chemistry and biology laboratories with modern information and communication technologies makes it possible to organize virtual laboratory classes. The library will be equipped with computer equipment, a Wi-Fi router, barcode and QR code readers, a color printer, and fiction and textbooks. It is also planned to equip secondary schools that do not have gyms with sports equipment and gym locker rooms for regular sports. In addition, in order to improve the quality of education, STEAM standards for educational equipment are being established for secondary schools.
This equipment standard will create favorable conditions for teachers and students of secondary schools, improve the quality of education, and support the widespread use of information and communication technologies in educational institutions. Teaching and laboratory rooms are organized in special rooms equipped with the necessary equipment. When the classroom is equipped, it should fully cover the content of the subject. There is a board in front of the cabinet, a TV set on the right and a computer on the left. On the left side of the board is a plant or animal cell stand or model, on the right is the evolution of the organic world, on the side of the window are room flowers, at the back are cabinets for the biology departments, and in these cabinets is the equipment belonging to each department. Science room equipment should be arranged in a separate system that meets the requirements of each biological science.

IV. Discussion

The equipment for experiments must be at the level of the latest scientific and technical achievements and meet the requirements of technical aesthetics, safety and occupational hygiene. Therefore, there are general requirements for the use of teaching equipment in classrooms and laboratories.

1. Pedagogical requirements: Classrooms and laboratories, their equipment and tools, are designed to illuminate the content of the topic studied in the lesson, to help students fully understand the structure of objects, to help them memorize and apply knowledge in practice, to implement the principle of demonstration in the process of biological education, and, through the use of advanced pedagogical and information techniques, to help students master the basics of biology, develop their learning and practical skills, and prepare for independent living and career choices.

2. Safety and hygiene requirements for the laboratory room: All teaching equipment in the room must meet the requirements for technical means of education and for occupational hygiene and safety. The classroom should have reminders (notes) on the rules of use and storage of equipment. Strict observance of safety and hygiene requirements is a reliable guarantee of the prevention of accidents and various diseases.

3. Aesthetic requirements: Every piece of equipment placed in the room, as well as its elements and general appearance, must meet the laws of beauty, nurture the artistic taste of students, and create a sense of satisfaction in both the student and the teacher.

Visual resources in the room: Optical instruments. Biology classes make extensive use of optical instruments such as microscopes and magnifiers. They are used to study the anatomical and morphological features of animals and plants invisible to the naked eye, as well as the structure of microorganisms. Visual aids used in biology classes are divided into natural and pictorial aids. Naturally prepared aids include: for botany, herbariums, herbarium tables and handouts made from dried plant organs for practical work; for zoology, collections of insects and fixed representatives of invertebrate species, wet preparations showing the development of animals, stuffed specimens (taxidermy mounts) and skeletons of vertebrates of various systematic groups, and handouts such as parts of animals, fish bones, scales and bird feathers; for human anatomy, physiology and hygiene, the human skeleton, some bones, micropreparations and others.
Pictorial aids: charts and pictures for each course; for the course of human anatomy, physiology and hygiene, a model of the human body and of individual organ systems that can be divided into parts; for a general biology course, monkey skulls and brain models, as well as slides and micropreparations. Tables: particular attention should be paid to the storage of study tables. It is convenient to store the tables hanging on wire hooks in the cabinet. All the equipment of the biology room should be adapted to conducting experiments during the lesson, making practical observations, presenting tables, videos and slides in a timely manner, and distributing and collecting materials and tools for practical work. Keeping teaching aids in a system allows one to quickly find them and prepare them for use in the classroom. The proper and attractive placement of all items in the biology room helps to cultivate aesthetic feelings in students.

V. Conclusion

The teaching aids that should always be in the classroom include:

1) materials distributed to each student: samples of various minerals, chemical raw materials, a collection of various minerals, metal alloys, samples of rubber, coal and petroleum products, an aluminum and steel collection, and a chemical fiber collection;

2) visual aids: atomic crystal lattices, atomic models, and tables representing various chemical production processes;

3) reagents required for experiments: oxides, acids, salts, indicators, alcohols, aldehydes, aromatic hydrocarbons, organic acids and carbohydrates;

4) experimental equipment: an ozonator, an electrolyzer for testing solutions, a water electrolysis device, a voltmeter, a limestone kiln, a muffle furnace, a drying cabinet, a distiller, technical scales and other devices.

With artificial lighting, the illumination of the teacher's workplace should be at least 300 lx, and of the classroom board 500 lx. Only when the chemistry classroom is fully equipped with these can the lessons be of high quality. In any laboratory there is, of course, a specialist working as a laboratory assistant. The duties and responsibilities of each laboratory assistant are defined based on the specifics of the institution in which he or she works.
A Role of Supraspinal Galanin in Behavioural Hyperalgesia in the Rat

Introduction In chronic pain disorders, galanin (GAL) is able to either facilitate or inhibit nociception in the spinal cord, but the contribution of supraspinal galanin to pain signalling is mostly unknown. The dorsomedial nucleus of the hypothalamus (DMH) is rich in galanin receptors (GALR) and is involved in behavioural hyperalgesia. In this study, we evaluated the contribution of supraspinal GAL to behavioural hyperalgesia in experimental monoarthritis. Methods In Wistar-Han males with a four-week kaolin/carrageenan-induced monoarthritis (ARTH), paw-withdrawal latency (PWL) was assessed before and after DMH administration of exogenous GAL, a non-specific GALR antagonist (M40), a specific GALR1 agonist (M617) and a specific GALR2 antagonist (M871). Additionally, the analysis of c-Fos expression after GAL injection in the DMH was used to investigate the potential involvement of brainstem pain control centres. Finally, electrophysiological recordings were performed to evaluate whether pronociceptive On- or antinociceptive Off-like cells in the rostral ventromedial medulla (RVM) relay the effect of GAL. Results Exogenous GAL in the DMH decreased PWL in ARTH and SHAM animals, an effect that was mimicked by a GALR1 agonist (M617). In SHAM animals, an unselective GALR antagonist (M40) increased PWL, while a GALR2 antagonist (M871) decreased PWL. M40 and M871 failed to influence PWL in ARTH animals. Exogenous GAL increased c-Fos expression in the RVM and dorsal raphe nucleus (DRN), with effects being more prominent in SHAM than in ARTH animals. Exogenous GAL failed to influence the activity of RVM On- or Off-like cells of SHAM and ARTH animals. Conclusions Overall, exogenous GAL in the DMH had a pronociceptive effect that is mediated by GALR1 in healthy and arthritic animals and is associated with alterations of c-Fos expression in the RVM and DRN, serotonergic brainstem nuclei known to be involved in the regulation of pain.

Introduction

Galanin (GAL) is an injury-responsive peptide that is dramatically upregulated in the dorsal root ganglia and spinal dorsal horn interneurones during inflammation [1] or after nerve injury [2]. In healthy animals, GAL's action on nociceptive processing in the spinal cord is bidirectional, with low concentrations eliciting pronociceptive actions [3] and high concentrations promoting antinociception [4]. Spinal actions of GAL also vary with the differential availability and activation of GAL receptor (GALR) subtypes. GALR1 has an inhibitory action and is more abundant than GALR2 (excitatory) and GALR3 (inhibitory) in the superficial dorsal horn [5]. Despite the considerable number of studies evaluating its action in the peripheral nervous system and at the spinal cord level, the role of GAL in pain modulation at the supraspinal level is mostly unknown. Under basal conditions, several studies have shown that, in both humans and rodents, GAL is expressed in the supraoptic nucleus, the paraventricular nucleus of the hypothalamus, the dorsomedial hypothalamic nucleus (DMH), the arcuate nuclei, the lateral hypothalamic area, the locus coeruleus (LC), the amygdala (AMY) and the median raphe nucleus [6], all areas involved in supraspinal pain modulation [7][8][9][10][11]. In relation to receptor expression, GALR1 is strongly expressed in the LC, dorsal raphe nucleus (DRN), the paraventricular nucleus of the hypothalamus, DMH, AMY, thalamus and medulla oblongata [12][13][14][15].
However, in the AMY, GALR2/R3 are also significantly expressed [12]. Similarly, all types of GAL receptors are expressed in the prefrontal cortex and the hippocampus, but to a lesser extent [12,14,15]. GALR2 is highly expressed in the hypothalamus, dentate gyrus, piriform cortex and mammillary nuclei [14,15], while the expression of GALR3 has been reported mainly in the hypothalamus (preoptic, DMH, lateral and posterior hypothalamic, ventromedial and premammillary nuclei) [15], the bed nucleus of the stria terminalis, periaqueductal grey matter (PAG), lateral parabrachial nucleus and medial reticular formation [16]. Again, most brain areas mentioned above are involved in the codification and modulation of nociceptive inputs [7,10]. The administration of exogenous GAL to the arcuate [17], tuberomammillary [18], nucleus accumbens [19], central nucleus of the AMY [20,21] and PAG [22] decreases nociception in healthy rats, an effect that is mediated by GALR1 in rodents [23]. A similar effect is observed in some pathological conditions, such as acute inflammation or mononeuropathy [22], where the microinjection of supraspinal exogenous GAL also decreases nociception. Despite the apparent antinociceptive role of supraspinal GAL in pain modulation, the intracerebroventricular administration of a GALR1 agonist in rats increased c-Fos expression in the DMH [24], an area that facilitates nociception by promoting behavioural hyperalgesia [9,25]. As hyperalgesia is one of the hallmarks of chronic pain, activation of the DMH promotes behavioural hyperalgesia, and GAL receptors are strongly expressed in the DMH, here we evaluated the contribution of GAL receptors in the DMH to the descending control of inflammatory hyperalgesia in monoarthritis as well as of nociception in healthy controls.

Animals, ethical issues and anaesthesia

The experiments were performed in adult male Wistar Han rats weighing 175-250 g (Charles River, Barcelona, Spain). A total of 96 animals (SHAM, n = 48 and ARTH, n = 48) were used in the experiments herein: 40 animals (SHAM, n = 20 and ARTH, n = 20) were used in the behavioural assessment, 32 animals (SHAM, n = 16 and ARTH, n = 16) in the c-Fos protocol and 24 animals (SHAM, n = 12 and ARTH, n = 12) in the electrophysiological evaluation. Animals were randomly assigned two by two to boxes upon arrival; a blue line was painted on the tail of one rat and a red line on the tail of the other. Each box was numbered from 1 to 48, and no indication of whether the animals were assigned to the SHAM or ARTH group was displayed. The list discriminating the boxes corresponding to the SHAM or ARTH groups was kept by an independent party. Each animal was considered a single unit within its experimental group. Animals were housed two per cage, except for animals with chronic intracerebral cannulae implanted, which were housed individually. Food and water were available ad libitum and animals were maintained in a climate-controlled room at a temperature of 22±2°C and humidity of 55±5%, under a 12 h light/dark cycle with lights on at 8:00 am. The experimental protocol followed the European Community Council Directives 86/609/EEC and 2010/63/EU concerning the use of animals for scientific purposes and was approved by the Institutional Ethical Commission (Permit Number: 23248). All efforts were made to minimize animal suffering and to use only the number of animals necessary to produce reliable scientific data.
Anaesthesia was induced by administering pentobarbitone (50 mg/kg, i.p., Eutasil, CEVA, Algés, Portugal) and maintained by infusing pentobarbitone (15-20 mg/kg/h, i.p.). The level of anaesthesia was frequently assessed by determining behavioural responses to noxious pinching. Body temperature was maintained within the physiological range with the help of a warming blanket (DC Temperature Controller, FHC, Bowdoin, ME, USA). At the end of the experiment, animals received a lethal dose of pentobarbitone.

Induction of arthritis

The induction of monoarthritis (ARTH) was performed four weeks before the actual experiments, as described in detail elsewhere [9,26]. In order to keep the researcher blind as to whether the animals from a specific box were assigned to the SHAM or ARTH group, the animals were anaesthetized (section 2.1) by a third party in an adjacent room and then brought to the surgical table in groups of two for the injection of SAL or K/C into the right knee joint. Briefly, in anaesthetised animals a mixture of 3% kaolin and 3% carrageenan (K/C, Sigma-Aldrich, St. Louis, MO, USA) dissolved in saline was injected into the synovial cavity of the right knee joint at a volume of 0.1 mL. This model produces mechanical hyperalgesia, which begins a few hours after surgery and extends up to 8 weeks [27]. After the procedure, animals were returned to the adjacent room, the anaesthesia was reversed and the animals were monitored until fully recovered (eating and grooming). At the end of the induction session all boxes were returned to the animal house. In each animal, the development of arthritis was verified again 1 h prior to each behavioural session. While confirming the arthritic status of the animals, through flexion and extension of the right leg, the experimenter was handed the animals by a third party without any specific order and without prior knowledge of the box number. Only those rats that vocalized every time after five flexion-extension movements of the knee joint were considered to have arthritis, and they were included in the ARTH group. SHAM animals were injected with 0.1 mL saline in the synovial cavity of the right knee joint. SHAM animals did not vocalize to any of the five consecutive flexion-extension movements of the knee joint. After the test, the animals were returned to their home cages by a person other than the evaluator.

Behavioural assessment of nociception

All behavioural tests were performed during the daytime, starting at 9:30 am and ending at 1:30 pm, after which the animals were returned to the animal house.

3.1 Mechanical hyperalgesia. The application of noxious pressure to the primary site of injury is a classical approach to measuring mechanical hyperalgesia [28], both in humans and animals [29]. Here, the pressure application measurement (PAM; Ugo Basile, Comerio, Italy) method was used. It allows an accurate behavioural measurement of mechanical hypersensitivity in rodents with chronic inflammatory joint pain [30] by the application of a force in the range of 0-1500 g. To perform the test, with the animal securely held, the force transducer unit (fitted to the experimenter's thumb) is placed on one side of the animal's knee joint and the forefinger on the other, and an increasing force is applied across the joint until a behavioural response is observed (limb withdrawal, freezing of whisker movement, wriggling or vocalization), with a cut-off of 5 s.
The peak force applied immediately prior to the behavioural response is recorded as the response threshold (RT). RT was measured twice in the ipsilateral and contralateral limbs at 1 min intervals. The mean RTs were calculated per animal. At the end of the session animals were returned to their home cage.

3.2 Thermal hyperalgesia (heat). Heat hyperalgesia was evaluated using the Hargreaves test [31]. The rats were habituated to the experimental conditions by allowing them to spend 1-2 h daily in the experimental room for the three days preceding any behavioural tests [9]. For assessing heat hyperalgesia, a radiant heat source was placed under the hindpaws of awake animals and the time between heat application and the withdrawal response (Plantar Test Instrument, Model 37370, Ugo Basile, Varese, Italy) was registered as the paw-withdrawal latency (PWL). In each session, the PWL was assessed prior to drug administration in the DMH and 20 min after. At each time point, the PWL measurement was repeated twice at an interval of 1 min and the mean of these values was used in further calculations. Cut-off time was 15 s.

Procedures for intra-DMH microinjections

Before the placement of the guide cannulas, the animals were anaesthetized (section 2.1) by a third party in an adjacent room and then brought to the surgical table one at a time. For intra-DMH drug administration, four weeks before the actual experiments (at the same time that arthritis was induced), animals were anaesthetised and placed in a stereotaxic frame, and one stainless steel guide cannula (26 gauge; PlasticsOne, Roanoke, VA, USA) was then implanted in the DMH according to the coordinates of the atlas of Paxinos and Watson [32]. The tip of the guide cannula was positioned 1 mm above the desired injection site in the DMH [AP: −3.24 mm from bregma; ML: 0.4 mm lateral from the midline (right side); DV: 7.5 mm below the surface of the skull]. The guide cannula was kept in place with two dental screws and dental cement. A dummy cannula was inserted into the guide cannula to close the top. After the procedure, the anaesthesia was reversed and animals were monitored until fully recovered (eating and grooming) in the adjacent room, then returned to the animal house. In order for the experimenter to remain blinded as to which animals were SHAMs or ARTHs, prior to the beginning of the behavioural session the cards displaying the number of the box were substituted by cards displaying letters. Test drugs were administered into the DMH through a 33-gauge injection cannula (PlasticsOne) inserted into and protruding 1 mm beyond the tip of the guide cannula. The microinjection was made using a 10.0-μL Hamilton syringe connected to the injection cannula by a polyethylene catheter (PE-10; PlasticsOne). The injection volume was 0.5 μL and therefore the spread of the injected drugs within the brain was expected to be 1 mm [33]. The efficacy of injection was monitored by observing the movement of a small air bubble through the tubing. The injection lasted 20 s and the injection cannula was left in place for an additional 30 s to minimize the return of drug solution back to the injection cannula. Brain injection sites were histologically verified from post-mortem sections and plotted on standardized sections from the stereotaxic atlas [32] (Fig. 1). After the completion of the tests, once the animals had been returned to the animal house, the cards were switched again.
The attribution of the letter cards was recorded in a lab book separate from the one used to register the results. The order of attribution of the letter cards was random and changed in each experimental session. The order of administration of the drugs to each animal was defined at the beginning of the experiment to avoid potential confounding effects related to this parameter. The results of the tests were only associated with the respective animal after the end of the experiment.

Drugs

Solutions for drug administration in the DMH were prepared in sterilized 0.9% saline (Unither, Amiens, France; pH 7.2). All the experimental drugs used in this work were acquired from Tocris (Bristol, UK). Each injection had a volume of 0.5 μL and contained either GAL (1.0 nmol), a non-specific GAL receptor antagonist (M40, 1.0 nmol), a specific GALR1 agonist (M617, 1.0 nmol) or a specific GALR2 antagonist (M871, 1.0 nmol) [17,20,23]. Control injections were performed with SAL in order to avoid any confounding effect that might result from injecting the liquid itself.

Course of the pharmacological study

Four weeks following induction of arthritis and insertion of the guide cannula for DMH injections, the efficacy of DMH-induced phasic and tonic modulation of nociception was determined by assessing the effect of DMH injection of exogenous GAL, M40, M617 and M871 upon the PWL in awake SHAM and ARTH animals. SAL was used in control injections. The latency of the withdrawal response was assessed 20 min [18,34] following the intra-DMH injections. The interval between behavioural assessments of different drug treatment conditions in the same animal was at least two days. The order of testing different drugs varied between the animals.

Recording of neuronal activity in nociceptive RVM cells

For the electrophysiological study, animals were removed from the animal house in a random order, one per day, already anaesthetized, by a person other than the experimenter. Anaesthesia (section 2.1) was administered at 9:30 am; the electrophysiological recordings started between 10 am and 10:30 am and lasted for 3 h. The order of administration of the drugs varied between the animals. The electrophysiological recordings of the activity of RVM neurones followed a protocol described in Pinto-Ribeiro and colleagues [9]. In anaesthetised animals, a recording electrode was placed in the RVM (AP: 5.88 mm rostral to the interaural line, ML: −0.6 to 0.6 mm lateral from the midline, and DV: 10.0 mm below the surface of the skull) [32]. Single neurone activity was recorded extracellularly with tungsten electrodes (tip impedance 3-10 MΩ at 1 kHz); the signal was amplified and filtered, and data sampling was performed through a CED Micro 1401 interface and Spike 2 software (Cambridge Electronic Design, Cambridge, UK). Recording of RVM neurones was started once the animal was under light anaesthesia; i.e., the animal gave a brief withdrawal response to noxious pinch, but the pinch did not produce any longer-lasting motor activity, nor did the animal have spontaneous limb movements. RVM neurones were classified based on their response to noxious heating of the tail with a tail-flick device (Ugo Basile). Heat stimulation of the tail was applied for 10 s. Functional classification of RVM neurones followed the scheme developed earlier by Fields and colleagues [35] and by Fields and Heinricher [36].
The neurones whose firing activity increased during heat stimulation of the tail were considered On-cells, those decreasing their activity were classified as Off-cells and, finally, cells displaying only a negligible (<10%) or no alteration in discharge rates during noxious stimulation were considered Neutral-cells and were not analysed in this study. However, a significant difference from the classification scheme of Fields [35] is that in the present study the noxious stimulus-induced withdrawal reflex was not taken into account in the classification. Therefore, as in previous studies, RVM cells are here called On-like and Off-like cells [9,37,38] rather than On- or Off-cells. The characterization of the response properties of RVM cells consisted of the following assessments performed successively: (i) spontaneous activity; (ii) response to heating of the tail; (iii) recovery to the spontaneous activity level. It should be noted that when analysing responses of RVM neurones to peripheral stimulation, the baseline discharge frequency (recorded just before the stimulation) was subtracted from the discharge frequency assessed during the stimulation, using the following formula: evoked response = discharge frequency during stimulation − baseline discharge frequency. During the recordings, animals also had a guide cannula implanted for drug administration into the DMH. After determining the baseline spontaneous activity of RVM cells and their baseline noxious-evoked responses to peripheral stimulation, either exogenous GAL or a non-specific GALR antagonist (M40) was microinjected in the DMH, in order to assess its phasic or tonic effect, respectively, upon the discharge rate of RVM neurones. All results from drug administrations were plotted as the variation in activity comparing baseline (before drug administration) and the values obtained 20 min after the injection into the DMH. The results of the electrophysiological analysis were only associated with the respective animal after all recordings were performed. In RVM recordings, the response properties of nociceptive neurones were assessed by determining their spontaneous activity and the response to noxious heating of the tail. The search for the next neurone to be studied started about 30 min after testing of the previous one was completed. At the end of the recording session, electrolytic lesions were made at the recording sites, the animals were given a lethal dose of pentobarbitone and the brains were removed for histological verification of the recording and injection sites.

c-Fos study

For the c-Fos induction protocol, animals were removed from the animal house in a random order, one per day, already anaesthetized, by a person other than the experimenter. Anaesthesia (section 2.1) was administered at 9:30 am; the protocol started between 10 am and 10:30 am and lasted for 2 h. To evaluate changes in brain activation after exogenous GAL administration in the DMH and/or peripheral noxious stimulation in SHAM and ARTH animals, c-Fos immunoreaction was performed following the protocol described elsewhere [39]. Animals were held in a stereotaxic frame.
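Returning to the electrophysiological classification described above, the following is a minimal Python sketch (not from the original study) of the On-like/Off-like/Neutral rule and the baseline-subtraction formula. The paper does not state whether the <10% criterion is applied to relative or absolute rate changes, nor how silent baselines were handled; a relative threshold and an ad hoc treatment of zero-rate baselines are assumptions here.

```python
def classify_rvm_cell(baseline_hz, stim_hz, neutral_band=0.10):
    """Classify an RVM neurone from its mean discharge rate before
    (baseline_hz) and during (stim_hz) noxious tail heating.

    Returns a (label, evoked_response) tuple, where the evoked response
    is the baseline-subtracted rate, matching the Methods formula:
    evoked response = rate during stimulation - baseline rate.
    """
    evoked = stim_hz - baseline_hz
    if baseline_hz > 0:
        rel_change = evoked / baseline_hz          # relative change (assumption)
    else:
        # Silent at baseline: any firing during heating counts as an increase.
        rel_change = float("inf") if evoked > 0 else 0.0
    if abs(rel_change) < neutral_band:
        return "Neutral", evoked                   # excluded from analysis
    return ("On-like" if evoked > 0 else "Off-like"), evoked

# Hypothetical example: a cell firing 4 Hz at rest and 9 Hz during heating.
label, evoked = classify_rvm_cell(baseline_hz=4.0, stim_hz=9.0)
print(label, evoked)   # On-like 5.0
```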
For drug administration, a guide cannula was placed in the DMH according to the coordinates of the atlas of Paxinos and Watson [32] and one of the following protocols was performed: (i) SAL microinjection in the DMH of SHAM animals; (ii) exogenous GAL microinjection in the DMH of SHAM animals; (iii) SAL microinjection in the DMH and extension of the right limb of SHAM animals; (iv) exogenous GAL microinjection in the DMH and extension of the right limb of SHAM animals; (v) SAL microinjection in the DMH of ARTH animals; (vi) exogenous GAL microinjection in the DMH of ARTH animals; (vii) SAL microinjection in the DMH and extension of the right limb of ARTH animals; (viii) exogenous GAL microinjection in the DMH and extension of the right limb of ARTH animals. Two exogenous GAL (or SAL) doses were injected in the DMH with a 15 min interval (Fig. 2). Extension of the paw was performed 5 times every 2 minutes for 2 h. Two hours after the first injection and first knee extension (beginning of the protocol), the animals were transcardially perfused with 4% paraformaldehyde in 0.1 M phosphate-buffered saline (PBS, pH = 7.4); brains were removed, post-fixed overnight in the same fixative and kept in a solution of 8% sucrose in PBS. One in three coronal vibratome (Leica, Carnaxide, Portugal) sections (50 μm thick) were treated with a solution of 3.3% H2O2 in PBS (30 min) to inhibit endogenous peroxidase activity, and then sequentially washed thrice (10 min) in PBS and PBS-Triton (PBS-T; 0.3% Triton X-100; Sigma-Aldrich, Sintra, Portugal).

In all experiments, animals were divided in two groups: control (SHAM), injected with saline, and arthritic (ARTH), injected with a mixture of kaolin and carrageenan in the synovial capsule of the right knee joint. Three days after the intrasynovial injection, arthritis was confirmed by performing five consecutive movements of flexion/extension of the knee (dashed line). Animals in the ARTH group developed a clear swelling of the treated knee joint and all gave a vocalization response during a minor extension and flexion of the affected limb by the experimenter. SHAM animals displayed no obvious swelling of the knee joint and did not vocalize when the limb was flexed. Four weeks after the induction of monoarthritis, animals were tested in three independent experiments. In experiment 1, the Hargreaves test was used to study the effect of exogenous galanin (GAL), a non-specific GAL receptor antagonist (M40), a specific GAL receptor-1 agonist (M617) and a specific GAL receptor-2 antagonist (M871) in the dorsomedial nucleus of the hypothalamus (DMH) upon paw-withdrawal latency (PWL) (n = 20 per experimental group). In each animal, the development of arthritis was confirmed again 1 h prior to each behavioural session by performing five consecutive movements of flexion/extension of the knee. During the experimental sessions, PWL was assessed before and 20 min after the administration of the drugs to the DMH. In experiment 2, two days prior to the c-Fos study, the pressure application measurement (PAM) test was performed to confirm the arthritic state of the animals. c-Fos expression was evaluated in SHAM and ARTH animals after exogenous GAL or saline (SAL) administration in the DMH, peripheral noxious mechanical stimulation, and the simultaneous application of noxious mechanical stimulation after the microinjection of exogenous GAL in the DMH (n = 16 per experimental group).
Peripheral stimulation was applied every 2 minutes during 2 hours and two drug injections were made in the DMH, one at the beginning and another 15 minutes after the beginning of peripheral stimulation. Neurones expressing c-Fos were quantified bilaterally in the ventrolateral periaqueductal grey matter (VLPAG), locus coeruleus (LC), dorsal raphe nucleus (DRN) and rostral ventromedial medulla (RVM). In experiment 3, RVM neurones were recorded before and after the administration of exogenous GAL and M40 in the DMH. The assessment of neuronal activity included a preliminary evaluation of spontaneous and noxious-evoked activity followed by the recording of these parameters 20 min after drug administration to the DMH (n = 12 animals per experimental group).

Statistics

For the effect of drugs upon PWL, the minimum number of animals needed was determined a priori using the G*Power software (version 3.1.9.2, University of Kiel, Germany); considering a two-way ANOVA test, an α error probability of 0.05, a power of 0.95 and an effect size of 0.80, this minimum was n = 23. For the effect of drugs upon RVM neuronal activity, the minimum number of animals needed, determined a priori with the same settings (two-way ANOVA, α error probability of 0.05, power of 0.95, effect size of 0.80), was n = 28. For the effect of drugs upon c-Fos expression, the minimum number of animals needed, determined a priori with the same settings, was n = 32. The results of the RT analysis correspond to the mean ± SD of raw data; no method of data normalization was used. To assess the effect of the drugs upon PWL for each behavioural session, the value of the basal withdrawal latency (withdrawal latency prior to drug administration) was subtracted from the value of the withdrawal latency at the peak effect of the drug; a negative value indicated that the withdrawal latency decreased, while a positive value corresponded to an increase in withdrawal latency after drug administration to the DMH. To perform this evaluation, raw data were used. To assess the effect of the drugs upon spontaneous neuronal activity, the value of the activity of RVM On- and Off-like cells without noxious peripheral stimulation prior to the administration of drugs in the DMH was subtracted from the activity of these cells without noxious peripheral stimulation at the peak effect of the drug. Similarly, to assess the effect of the drugs upon the noxious-evoked neuronal activity, the value of the activity of RVM On- and Off-like cells during noxious peripheral stimulation prior to the administration of drugs in the DMH was subtracted from the activity of these cells during noxious peripheral stimulation at the peak effect of the drug. Only raw data were used in this analysis. To compare the level of c-Fos in each area, the total number of cells stained was registered per area studied and only raw data were used in this analysis. GraphPad Prism 6 software (GraphPad Software Inc, La Jolla, CA, USA) was used to perform the statistical analysis. The comparison of differences between RT in the PAM test, and between the baselines of RVM neuronal spontaneous and heat-evoked activities of SHAM and ARTH animals, was performed using a Student's t-test for unpaired data. To compare differences in RT between the ipsilateral and the contralateral side in SHAM and ARTH animals, a Student's t-test for paired data was used.
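As a rough cross-check of the a priori sample sizes above, the calculation can be approximated in Python with statsmodels. Note that FTestAnovaPower models a one-way layout, so mapping it onto the two-way designs (and hence reproducing the exact n of 23, 28 or 32) depends on the numerator degrees of freedom G*Power was given, which the paper does not report; the k_groups value below (2 groups x 2 drug conditions) is an illustrative assumption, not taken from the paper.

```python
# Approximate a priori sample-size calculation (sketch, not the original
# G*Power session): alpha = 0.05, power = 0.95, effect size f = 0.80.
from statsmodels.stats.power import FTestAnovaPower

solver = FTestAnovaPower()
total_n = solver.solve_power(
    effect_size=0.80,  # Cohen's f, as reported in the Statistics section
    alpha=0.05,
    power=0.95,
    k_groups=4,        # assumption: 2 (SHAM/ARTH) x 2 (drug/SAL) cells
)
print(f"approximate total N: {total_n:.1f}")
```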
All other comparisons between groups were performed using a two-way ANOVA followed by a Bonferroni correction for multiple comparisons as post-hoc test. Statistical significance was accepted at P < 0.05.

Monoarthritic animals developed ipsilateral mechanical allodynia

Three days after the intrasynovial injection, all animals in the ARTH group developed a clear swelling of the treated knee joint and all gave a vocalization response during a minor extension and flexion of the affected limb by the experimenter. SHAM animals displayed no obvious swelling of the knee joint and did not vocalize when the limb was flexed. Mechanical hyperalgesia in the knee joint was assessed by determining the RT to mechanical pressure over the knee joint. No differences were found between the RT of the ipsilateral and contralateral hindpaws in SHAM animals (t7 = 1.535, P = 0.169), while in ARTH animals the ipsilateral RT was significantly lower than the contralateral (t7 = 3.377, P = 0.0118). No differences were found between the contralateral RT of SHAM and ARTH animals (t14 = 0.000, P > 0.999). Four weeks after induction of monoarthritis, RT was significantly different between SHAM and ARTH animals (t14 = 2.883, P = 0.012). This result indicates that K/C induced a significant RT decrease, i.e., mechanical hyperalgesia (Fig. 3).

Exogenous GAL in the DMH decreases paw-withdrawal latency, an effect reversed by the administration of a GAL receptor antagonist

To investigate a possible role of supraspinal GAL in phasic and tonic pain facilitation in SHAM and ARTH animals, paw-withdrawal latencies (PWL) were assessed after exogenous GAL or M40 microinjection, respectively, in the DMH. The PWL of SHAM and ARTH animals 20 min after exogenous GAL microinjection in the DMH was significantly decreased when compared with SAL injection (main effect of the drug: F1,76 = 61.880, P < 0.001). The exogenous GAL-induced decrease in PWL was of the same magnitude in the SHAM and ARTH groups (main effect of the group: F1,76 = 2.704, P = 0.104). Post hoc tests confirmed that the PWL of SHAM and ARTH animals treated with exogenous GAL was significantly lower than the PWL of SHAM and ARTH animals treated with SAL (Fig. 4A). Non-specific inhibition of GAL receptors induced by administration of M40 in the DMH significantly altered the PWL when compared with SAL injection (main effect of the drug: F1,76 = 13.830, P < 0.001). The effect of M40 was significantly different between SHAM and ARTH animals (main effect of the group: F1,76 = 10.070, P = 0.002). The M40-induced effect on PWL varied with the experimental group (interaction between group and drug: F1,76 = 8.048, P = 0.006). Post hoc tests indicated that M40 significantly increased PWL in SHAM animals, but did not alter PWL in ARTH animals (Fig. 4B).

Nociceptive facilitation after exogenous GAL in the DMH is mediated by GAL receptor type 1

To determine which GAL receptor is involved in the pain facilitation induced by exogenous GAL in the DMH, PWL was assessed after the administration of M617 (a specific agonist of GAL receptor type 1, GALR1) and M871 (a specific antagonist of GAL receptor type 2, GALR2) into the DMH. Twenty minutes after microinjecting M617 in the DMH, PWL was significantly decreased when compared with SAL injection (main effect of the drug: F1,76 = 39.530, P < 0.001). The effect of M617 was not different between the SHAM and ARTH groups (main effect of the group: F1,76 = 0.357, P = 0.552).
Post hoc tests confirmed that PWL significantly decreased after M617 administration in the DMH both in SHAM and ARTH animals when compared to PWL after SAL administration (Fig. 4C). Administration of M871 in the DMH had a significant effect on PWL (main effect of the drug: F1,76 = 29.820, P < 0.001), and the effect of M871 varied with the experimental group (interaction group x drug: F1,76 = 5.089, P = 0.027). Post hoc tests showed that M871 significantly decreased the PWL in SHAM animals but did not significantly alter the PWL of ARTH animals (Fig. 4D).

Expression of c-Fos in brainstem areas involved in pain control is altered by exogenous GAL in the DMH

Descending pain modulatory drive from the forebrain to the spinal cord may be relayed by multiple areas in the brainstem. To determine which brainstem areas mediate exogenous GAL-driven descending pain modulatory effects originating in the DMH, c-Fos expression was investigated in caudal brain areas that not only express GAL and/or its receptors but are also involved in the descending modulation of nociception. Hence, we compared changes in c-Fos expression in the ventrolateral periaqueductal grey matter (VLPAG), the LC, the dorsal raphe nucleus (DRN) and the rostral ventromedial medulla (RVM) between SHAM and ARTH animals after the different stimulation protocols. Post-hoc testing showed that exogenous GAL in the DMH significantly increased c-Fos expression when compared to SAL-injected animals in both experimental groups, although a higher expression was observed in ARTH animals (Fig. 5A). The flexion-extension protocol (SAL+STI) increased c-Fos expression in SHAM animals when compared to SAL and GAL administration, while it decreased its expression in ARTH animals when compared to GAL-injected ARTH. The simultaneous infusion of GAL in the DMH and flexion-extension of the injected limb decreased c-Fos expression in SHAM when compared with its expression after the flexion-extension protocol, and in ARTH when compared with GAL-injected ARTH (Fig. 5A). Similarly, in the ipsilateral RVM, the number of cells activated was significantly different depending on the protocol (main effect of the protocol: F3,24 = 70.240, P < 0.001), an effect that varied with the experimental group (interaction group x protocol: F3,24 = 20.240, P < 0.001). Post-hoc testing showed that GAL in the DMH increased the number of c-Fos expressing cells in both experimental groups when compared to SAL-injected animals, while its expression differed between SHAM and ARTH animals after repeated flexion-extension of the injected limb, with increased c-Fos expression in SHAM animals alone when compared with SAL- and GAL-injected SHAM (Fig. 5B). The simultaneous injection of GAL in the DMH and flexion-extension of the injected limb significantly decreased the number of c-Fos positive cells in SHAM when compared to the flexion-extension protocol, and in ARTH animals when compared with the GAL administration and flexion-extension protocols (Fig. 5B). The number of c-Fos expressing neurones in the contralateral LC did not vary with the stimulation protocols (main effect of the protocol: F3,24 = 0.413, P = 0.745), although it was significantly different between experimental groups (main effect of the group: F3,24 = 16.410, P < 0.001). Post-hoc testing did not show a specific alteration between each stimulation protocol (Fig. 5C).
In the ipsilateral LC, the number of c-Fos positive cells varied with the stimulation protocol (main effect of the protocol: F3,24 = 7.462, P = 0.001), an effect that depended on the experimental group (interaction group x protocol: F3,24 = 14.310, P < 0.001). Post-hoc testing showed an increase in the number of c-Fos expressing neurones after the simultaneous infusion of GAL in the DMH and flexion-extension of the injected limb in ARTH animals when compared with the same protocol in SHAM and with the SAL/GAL/flexion-extension protocols in ARTH (Fig. 5D). In the contralateral VLPAG, the number of c-Fos expressing cells varied with the stimulation protocol (main effect of the protocol: F3,24 = 19.200, P < 0.001), an effect that depended on the experimental group (interaction group x protocol: F3,24 = 40.030, P < 0.001). Post-hoc testing showed a significant increase in the number of c-Fos positive cells in ARTH animals when compared to SHAM, and after exogenous GAL in the DMH of SHAM animals when compared to SAL-injected SHAM (Fig. 5E). In addition, in ARTH animals the number of c-Fos expressing neurones was significantly lower in all protocols when compared to SAL-injected ARTH animals (Fig. 5E). Similarly, the number of cells activated in the ipsilateral VLPAG differed after the stimulation protocols (main effect of the protocol: F3,24 = 22.920, P < 0.001) and depended on the experimental group (interaction group x protocol: F3,24 = 47.100, P < 0.001). Post-hoc testing (Fig. 5F) showed increased c-Fos expression in SAL-injected ARTH animals when compared with SAL-injected SHAM. GAL in the DMH significantly increased c-Fos expression in SHAM animals while it significantly decreased its expression in ARTH animals. c-Fos expression after flexion-extension of the injected limb was not significantly different when compared to SAL-injected SHAM, although it was significantly decreased when compared to GAL-injected SHAM (Fig. 5F). This protocol also significantly decreased c-Fos expression in ARTH animals when compared to SAL-injected ARTH, although its expression was significantly higher when compared to GAL-injected ARTH. The simultaneous infusion of GAL in the DMH and flexion-extension of the injected limb did not significantly alter c-Fos expression when compared to SAL-injected SHAM, but it was significantly decreased when compared to its expression after GAL injection in the DMH. In ARTH animals, c-Fos expression was significantly decreased after the simultaneous infusion of GAL in the DMH and flexion-extension of the injected limb when compared to SAL-injected ARTH and the flexion-extension protocol (Fig. 5F). Finally, in the DRN, the number of c-Fos positive cells varied with the stimulation protocol (main effect of the protocol: F3,24 = 24.690, P < 0.001) and this effect depended on the experimental group (interaction group x protocol: F3,24 = 14.140, P < 0.001). Post-hoc testing showed an increased DRN activation after GAL in the DMH in both experimental groups when compared to SAL-injected animals (Fig. 5G). The flexion-extension of the injured limb significantly increased c-Fos expression in SHAM animals when compared to ARTH animals and when compared with SAL- and GAL-injected SHAM. The simultaneous infusion of GAL in the DMH and flexion-extension of the injected limb increased c-Fos expression in SHAM animals when compared to ARTH and to its expression after SAL, but decreased it when compared to the flexion-extension protocol.
Additionally, it decreased c-Fos expression in ARTH animals when compared to GAL-injected ARTH (Fig. 5G).

The activity of pain modulatory On- or Off-like cells in the RVM is not altered by exogenous GAL in the DMH

To evaluate the effect of exogenous GAL administration in the DMH upon the activity of RVM neurones, the spontaneous and heat-evoked activities of presumably pronociceptive RVM On-like cells and antinociceptive RVM Off-like cells were recorded in SHAM and ARTH animals before and after the administration of exogenous GAL, M40 or SAL. Before drug administration, the spontaneous activity of RVM On-like cells was significantly decreased in ARTH animals when compared to SHAM animals (Table 1). The magnitude of the response evoked by noxious heating of the tail in RVM On-like cells was not different between SHAM and ARTH animals. In RVM Off-like cells, the spontaneous activity before drug treatments was significantly decreased in ARTH animals when compared to SHAM animals. Similarly, the magnitude of the heat-evoked response in RVM Off-like cells of ARTH animals was significantly lower when compared to that in SHAM animals (Table 1). Microinjection of drugs into the DMH did not alter the spontaneous activity of RVM On-like cells (main effect of the drug: F2,98 = 0.262, P = 0.770). Overall, after drug injection, On-like cell spontaneous activity was different between ARTH and SHAM animals (main effect of the group: F1,98 = 6.510, P = 0.012) (Fig. 6A), although post-hoc tests failed to show a significant difference between experimental groups at a specific time point. The administration of drugs to the DMH did not alter the spontaneous activity of RVM Off-like cells (main effect of the drug: F2,81 = 0.616, P = 0.543) and the spontaneous activity was not different between experimental groups (main effect of the group: F1,81 = 1.200, P = 0.277) (Fig. 6B) 20 min after drug administration. Microinjection of drugs into the DMH altered the heat-evoked activity of RVM On-like cells (main effect of the drug: F2,98 = 5.010, P = 0.009), but this effect did not vary with the experimental group (interaction group x drug: F2,98 = 1.318, P = 0.272). Post hoc testing failed to find significant drug treatment-induced effects on the heat-evoked response of On-like cells (Fig. 6C). Similarly, drug administration in the DMH changed the heat-evoked activity of RVM Off-like cells (main effect of the drug: F2,81 = 4.967, P = 0.009), but these differences did not vary with the experimental group (interaction group x drug: F2,81 = 2.230, P = 0.114) 20 min after administration. Again, post hoc testing failed to find significant drug treatment-induced effects on the heat-evoked response of Off-like cells, except for the increase of response after exogenous GAL treatment in the SHAM group (Fig. 6D).

Discussion

This study demonstrates, for the first time, a pronociceptive role for supraspinal GAL, as the administration of this neuropeptide to the DMH significantly increased spinal nociception (as indicated by the decrease in PWL) in awake healthy and arthritic animals. Moreover, the microinjection of GAL receptor agonists/antagonists in the DMH showed that exogenous GAL's pronociceptive effect was mediated by GALR1 but not GALR2. The analysis of c-Fos expression revealed the serotonergic RVM and DRN, particularly in SHAM animals, as caudal areas potentially involved in signalling this descending pronociceptive effect.
The exogenous GAL-induced increase of c-Fos expression in the RVM may not be explained by an action on RVM On-like or Off-like pain modulatory cells, as the discharge rates of these two non-serotonergic cell types remained unaltered during the pharmacological manipulations in the present study.

Novel pronociceptive effect of supraspinal GAL

Administration of exogenous GAL into the DMH induced behavioural hyperalgesia (decreased PWL) in healthy and ARTH animals. This is a novel effect for GAL, as previous studies had only reported an antinociceptive role for this neuropeptide after its administration in brain areas involved in pain modulation, such as the hypothalamic arcuate nucleus [40], central AMY [20] and the PAG [22]. Thus, and similarly to what is observed at the spinal cord level [41,42], GAL appears to have a bidirectional role in supraspinal descending pain modulation depending on the area where GALRs are activated. The demonstration of a tonic pronociceptive effect of GAL, by treatment of the DMH with a non-specific GALR antagonist, supports the proposal that the pronociceptive effect of GAL was mediated by GALRs. Administration of exogenous GAL in the DMH facilitated nociception in both ARTH and SHAM animals. This finding contrasts with the results of previous studies indicating that exogenous GAL is antinociceptive when administered in the hypothalamic arcuate nucleus of animals with inflammation [17], or in the PAG of animals with mononeuropathy [22]. Importantly, the present results show that the descending GAL-driven pathway originating in the DMH, unlike the glutamate-driven pathway [9], remains functional in animals with experimental monoarthritis. Administration of a non-specific GALR antagonist alone into the DMH of ARTH animals had no effect on nociception, while it produced antinociception in SHAM controls. This finding suggests that the GAL-driven pathway descending from the DMH is not tonically active in ARTH animals as it is in SHAM animals, but that its activation in ARTH animals depends on the activation of upstream pathways inducing the release of GAL in the DMH.

GAL-driven nociceptive facilitation is mediated by GALR1

Further analysis of the contributions of GALR1 and GALR2 to the pain modulatory role of GAL in the DMH demonstrated that the facilitatory effect of GAL is mediated by GALR1, a receptor that couples to the Gi/Go pathway to decrease adenylyl cyclase activity [43]. Once more, this result contrasts with the available literature, where the activation of this receptor at spinal and supraspinal levels is reported to elicit an antinociceptive effect [44][45][46]. In fact, in the spinal cord it is GALR2 that has been reported to have a pronociceptive effect [41]; however, the results on administration of a GALR2 antagonist in the present study indicated that endogenous GAL acting on GALR2 had a tonic antinociceptive action in SHAM animals, whereas blocking GALR2 did not alter nociception in ARTH animals. Another possibility would be that a differential distribution of GALR1 and GALR2 receptors in the DMH could contribute to enhance GALR1-dependent effects; however, as demonstrated by Mitchell and collaborators [47], not only does mRNA analysis confirm an overlap of GALR1 and GALR2 in the DMH, but both receptors are also highly expressed in this nucleus.
On the other hand, the expression of both receptors in the DMH does not account per se for the GAL/DMH pronociceptive effect, since these receptors are also highly expressed and overlapping in the arcuate nuclei, an area where the intracerebral administration of exogenous GAL promotes antinociception [17]. Overall, it is probable that the facilitation of nociceptive behaviour by GAL in the DMH of ARTH animals results (i) from disinhibition of pronociceptive pathways driven by GALR1 and/or (ii) from a decrease in the activity of antinociceptive GALR2-driven circuits. It is also possible that behavioural hyperalgesia in ARTH animals is reinforced by their emotional-like status. A recent study from our group [48] showed that animals with experimental monoarthritis displayed depressive-like behaviour. Interestingly, Blackshear et al. [24] showed that the intracerebroventricular injection of GAL and M617 increased c-Fos expression in the DMH and the AMY, nuclei involved in the modulation of the emotional component of pain. Another work [49] showed that acute activation of GALR1 promoted the expression of 'prodepressive-like' behaviours, while GALR2 mediated the 'antidepressant-like' effects of GAL. Hence, taking into account that depressive states heighten pain perception in humans [50] and rodents [51], the pronociceptive GALR1 and antinociceptive GALR2 effects observed in this study may be related to comorbid mood alterations known to be associated with chronic pain [52,53].

Activation of serotonergic nuclei is influenced both by exogenous GAL in the DMH and by noxious peripheral stimulation

The analysis of c-Fos expression was restricted to the VLPAG, DRN, LC and RVM, since these areas have been previously demonstrated to be strongly modulated by the DMH [54][55][56], while simultaneously implicated in nociceptive processing [57][58][59]. The limb extension-induced increase in c-Fos expression in the VLPAG and RVM of SHAM animals suggests that repetitive extension of a non-arthritic knee joint for a period of two hours can be considered a noxious stimulus [60]. In addition, the increased c-Fos expression in the VLPAG and RVM also suggests that repetitive knee joint extension activated the feedback loop of nociception involving the PAG-RVM-spinal dorsal horn circuitry, which may either inhibit or facilitate nociception [11,61,62]. Administration of exogenous GAL into the DMH increased the expression of c-Fos ipsilaterally in the VLPAG and bilaterally in the RVM, which suggests that DMH neurones expressing GALR are able to activate descending nociceptive controls. However, our electrophysiological data show that exogenous GAL in the DMH did not alter the activity of RVM On- and Off-cells, which are non-serotonergic pain control neurones. Therefore, we propose that the RVM cells expressing c-Fos following exogenous GAL treatment may have been RVM Neutral-cells, a subpopulation of which are serotonergic [63] and which were not studied in the present electrophysiological experiment. The fact that the DMH GAL-driven descending pronociceptive drive is independent of RVM On- and Off-like cell activity is very interesting in terms of pain management, since many centrally acting analgesic compounds (opioids, cannabinoids and non-steroidal anti-inflammatory drugs) reduce pain by increasing the discharge rate of antinociceptive RVM Off-cells and/or by inhibiting the discharge rate of pronociceptive RVM On-cells [61].
In SHAM animals, repetitive limb extension alone or exogenous GAL administration alone in the DMH activated the descending PAG-RVM-spinal cord pathway, as revealed by c-Fos expression. However, application of exogenous GAL simultaneously with repetitive extension of the limb failed to increase c-Fos expression in the PAG-RVM circuitry of SHAM animals, suggesting that together the two stimulation procedures counteracted each other's effect, leading to a general inhibition of this circuitry. The increased c-Fos expression in the RVM after the pronociceptive exogenous GAL treatment alone might reflect activation of RVM serotonergic cells. While the serotonergic system has a complex role in pain control, there is evidence suggesting that the net effect induced by RVM serotonergic neurones is facilitation of nociception [64]. It should be noted here that serotonergic RVM neurones are not the On- or Off-cells [63] that were studied in the present electrophysiological experiment using noxious heat and shown not to be influenced by exogenous GAL. We propose that the GAL-induced descending action may have activated medullo-spinal serotonergic neurones, shown as increased c-Fos expression in the RVM, resulting in the relay of a pronociceptive action to the spinal cord. The increased expression of c-Fos in the serotonergic DRN after limb extensions is in line with a role of this nucleus in ascending [65] and descending [66] pain modulatory pathways. Similarly, the increased expression of c-Fos in the DRN after exogenous GAL in the DMH is not unexpected, as the DMH projects directly to the DRN [67] and the activity of DRN serotonergic neurones is influenced by GALR1 present on their soma and proximal dendrites [68]. It still remains to be studied through which mechanisms the DRN might be involved in the relay of the descending pronociceptive effect driven by exogenous GAL in the DMH.

Activation of the noradrenergic LC by exogenous GAL in the DMH and noxious peripheral stimulation varies between SHAM and ARTH animals

Previous studies have demonstrated that the LC responds to noxious stimulation, as revealed e.g. by c-Fos expression [69], while it is a major source of spinal noradrenaline and of descending noradrenergic control of nociception [70,71]. In the present study, exogenous GAL treatment of the DMH alone failed to influence c-Fos expression in the LC of SHAM or ARTH animals. However, following repetitive limb extensions, c-Fos expression in the LC was increased in SHAM but not ARTH animals. Interestingly, the peripheral stimulation-induced increase of c-Fos expression in the LC was predominantly ipsilateral, while ascending nociceptive signals activate the LC contra- or bilaterally [71]. A potential explanation for the ipsilaterally increased c-Fos expression after peripheral stimulation in the present study is that it reflected activation of pain modulation pathways descending predominantly ipsilaterally, rather than processing of the ascending afferent volley, which is expected to be contra- or bilateral. The DMH has a strong galaninergic output to various brain areas [72], including the LC [65], and GAL has been shown to decrease neuronal firing in the LC [73]. While these findings suggest that the DMH may directly modulate activity of the LC, they still leave open the underlying mechanism and functional significance of the finding that exogenous GAL treatment of the DMH together with repetitive limb extensions increased c-Fos activity in the LC of ARTH but not SHAM animals.
Influence of arthritis and repetitive limb movement

The increase of c-Fos expression in the VLPAG, and to a lesser extent in the RVM, of SAL-treated ARTH animals indicates an overall increase in the tonic activity of the PAG-RVM-spinal cord pathways after the induction of experimental monoarthritis, which is in accordance with the enhancement of descending inhibitory circuits during chronic inflammation [74][75][76][77]. Interestingly, limb extension in the ARTH group decreased c-Fos expression in the VLPAG, suggesting that acute noxious mechanical stimulation of the injured knee dampens tonic descending inhibition mediated by the VLPAG. On the other hand, the increase in c-Fos expression in the RVM, taking into account that the RVM can either facilitate or inhibit nociception [78], could indicate that this nucleus is engaged in descending facilitation during acute noxious stimulation of ARTH animals, as shown for other chronic pain disorders [79][80][81]. Our electrophysiological results showed that, before any drug treatments, both the baseline activity and the peripheral stimulus-evoked response of antinociceptive RVM Off-like cells were lower in ARTH than SHAM animals, while there was no difference in the pre-treatment heat-evoked activity of pronociceptive RVM On-like neurones of ARTH and SHAM animals. This finding suggests that a decreased activity of RVM Off-like cells contributes to hyperalgesia in ARTH animals. However, it does not exclude the possibility that among the descending facilitatory mechanisms contributing to hyperalgesia in ARTH animals were other cell types of the RVM, in particular medullospinal serotonergic neurones, or other brainstem nuclei. Concerning the DRN, the expression of c-Fos after repetitive limb extensions was increased when compared with SAL-treated ARTH animals, but similar to the expression of c-Fos in SHAM animals after limb extensions, suggesting that nociceptive processing through this pathway is not enhanced after the induction of experimental monoarthritis. By contrast, it is possible that the noxious stimulation-evoked activation of the LC is impaired in ARTH animals, since c-Fos expression was decreased when compared to SHAM animals after limb extensions and unaltered when compared to SAL-treated ARTH animals. Without noxious stimulation, exogenous GAL in the DMH of ARTH animals appeared to dampen tonic descending inhibition (as indicated by decreased c-Fos expression in the VLPAG) while it enhanced the tonic activity of pronociceptive serotonergic circuits (indicated by increased c-Fos expression in the DRN and RVM), but not of noradrenergic circuits (unaltered c-Fos expression in the LC). However, when combined with limb extensions, both tonic descending inhibition (decreased c-Fos expression in the VLPAG) and the activity of pronociceptive serotonergic areas (decreased c-Fos expression in the RVM and DRN) were diminished. This finding indicates that in ARTH animals, exogenous GAL in the DMH exerts differential effects under basal and noxious stimulation-evoked conditions. A differential effect has also been reported while studying the role of GAL in the presence/absence of stress [49]. By contrast, only the combination of exogenous GAL injection in the DMH and limb stimulation was able to enhance the activity of the noradrenergic pain system, as evidenced by the strong increase of c-Fos expression in the LC.
Although noradrenergic pathways were until recently considered to exert mostly inhibitory influences on spinal nociception, Hickey and colleagues [59] recently demonstrated that a specific subpopulation of LC neurones enhances the processing of nociceptive information and could thus partly contribute to behavioural hyperalgesia in chronic inflammation. Further studies are needed to find out whether the LC is involved in mediating the descending pronociceptive effect elicited by exogenous GAL in the DMH.

Conclusions

In the present study, we demonstrate a pronociceptive, GALR1-mediated role for hypothalamic GAL in experimental monoarthritis. Exogenous GAL in the DMH appeared to exert differential effects upon the brainstem pain modulatory areas; the effect varied with the experimental group (healthy or arthritic animals), the brainstem nucleus (PAG, RVM, DRN or LC), and the presence or absence of concomitant noxious stimulation. Finally, the results suggest that further studies evaluating the potential applicability of GALR1 antagonists in the control of chronic inflammatory pain are needed.
Performing Disobedience: Domestic Transgressions and Political Transformation in Elizabeth Cary's The Tragedy of Mariam

The aim of this article is to probe instances of dramatic self-construction through the performance of disobedience as enacted by the female protagonists of Elizabeth Cary's The Tragedy of Mariam, critically exploring, in close relation to one another, Mariam's changing self-presentation from public loquacity to purposeful stoic silence, and Salome's transgression of the sex-gender system. As will be argued, these two performances of female subjectivity trigger a current of social change by destabilizing the naturalized patriarchal authority that sustains political order. For, as it will be explored, the public self-construction of feminine identities in Cary's play (mostly through the utterance of a public speech) creates a dramatic and textual space in which rebellious and transformative notions of female selfhood can negotiate the timely tensions between moral permanence and political change.

Introduction

In her book about women's fairy tales of the seventeenth century, Patricia Hannon argues that early modern representations of characters in flux (specifically, female characters) have the subversive effect of undermining the social hierarchy, insofar as social order is sustained on a fixed categorization of identities and thus cannot easily accommodate the notion of multiple roles for one individual (1998: 82). The issue of a woman's identity was particularly problematic in this regard. As Catherine Belsey explained, "unless in the exceptional case of a woman as sovereign of the realm, women exercised no legal rights as members of the social body" (1985: 153). This posed a sociopolitical problem, since "neither quite recognized as adults, nor quite equated with children, women posed a problem of identity which unsettled the law" (1985: 153). The notion of female subjecthood was slippery and a cause of certain social unrest; thus, in such a context, the dramatic representation of female characters whose identities are built through performance, that is, by means of a public self-construction that is changeable depending on circumstance, allows for the interpretation that such a theatrical construal of public female selfhood may set the ground for a political transformation of the status quo. Following this argument, the aim of this article is to probe instances of personal change and disobedient shapes of performance as enacted by the female protagonists of Elizabeth Cary's The Tragedy of Mariam, critically exploring, in close relation to one another, Mariam's changing self-presentation from public loquacity to purposeful stoic silence, and Salome's transgression of the sex-gender system. As will be argued, these two performances of a female public self, while seemingly divergent, simultaneously trigger a current of social change by destabilizing the naturalized patriarchal authority that sustains political order.
This essay will therefore examine forms of personal transgression, domestic disobedience and political subversion that stem from the characters' duplicitous behaviour, from their fluid self-defining and self-constructing performance and from the utterance of their defiant public speech, in consequence elaborating the argument that this public construction of feminine identities in Cary's play arguably creates a dramatic space in which multiple and transformative notions of female selfhood can negotiate the timely tensions between social and moral permanence on the one hand, and political change on the other.

Transgressing Privacy: Closet Drama as Public Performance

After much recent criticism has finally drawn proper attention to it, The Tragedy of Mariam, an early Jacobean closet drama composed by Elizabeth Cary sometime between 1603 and 1606 (Wray 2012: 11) and first published in 1613, has become well known among early modern scholars. Following as a main source Thomas Lodge's translation of Josephus's Antiquities of the Jews, Cary's play tells the story of Mariam, the second wife of Herod the Great, the tyrannous king of Judea from the year 39 to the year 4 B.C. The play takes place on a single day, when King Herod is mistakenly thought to have been killed by Caesar after a visit to Rome. When he suddenly and unexpectedly returns later that same day, he finds Mariam unwilling to disguise the sorrow she feels at learning that her despotic husband is, after all, alive; prey to jealousy, the king succumbs to his sister Salome's lies about a false attempt to kill him at Mariam's hand, for which he decides to have his wife beheaded. With Mariam's martyr-like execution ends a play which, as an instance of closet drama written by a female author, is unconventional within the corpus of English Renaissance theatre, but which at the same time offers a resonant view of some of the political and social anxieties regarding issues of authority and legitimacy that serve as a backbone for early modern political drama. In fact, even though criticism on The Tragedy of Mariam has traditionally followed the biographical approach, mostly influenced by those editions of the play published in tandem with Cary's biography Life, written in 1645 by one of her daughters, most likely Lucy (Wray 2012: 5),[1] most recent criticism has either focused on the subject of marriage ("the battlefield of the play," as famously defined by Beilin (2014: 167))[2] or drawn into perspective "the play's political and intellectual contexts" (Clarke 1998: 179). In parallel, however, authors such as Clarke have proposed bringing together these two places of signification, insightfully noting "the ideological importance of marriage within patriarchal government" (1998: 179). Clarke elaborates:

...marriage in the play can be read as a multiply nuanced metaphor, that gestures towards the public world of politics and the 'private' world of the family so as to reveal their interdependence and thereby adumbrate the role of women in the public sphere as guarantors and legitimators of male supremacy. Mariam's treatment of the problematics of the obligations and bonds of a marriage that is public and dynastic enables Cary to consider the political nature of allegiance and fidelity, and the grounds upon which such bonds may be dissolved. (Clarke 1998: 179)

[1] Wray criticizes how this approach, which has focused on detecting personal resonances in the play, has prioritized "a one-dimensional model of understanding" (2012: 6) that has focused on two identifications between the play and Cary's life: her marital tribulations and her religious dissent. Both are however difficult to sustain: firstly, because the writing of the play predates Cary's debatable marriage troubles and her conversion to Catholicism, and secondly because, as Wray argues, "any biographical reading is compromised because of the partiality of the extant material" (2012: 7). Life is, after all, "quasi-hagiographic" (2012: 5), "highly crafted and self-justifying" (2012: 7-8) and, as Wolf writes, engaged in "discrimination," "interpretation" and "omission" (qt. in Wray 2012: 8). The biographical approach to the play is however inescapable when revising the critical reception of the play. Noteworthy examples of this critical view are Fischer's "Elizabeth Cary and Tyranny, Domestic and Religious" (1985) and Ferguson's "Running On with Almost Public Voice: The Case of E.C." (1991).

[2] Besides Beilin, some of the most eloquent studies that have scrutinized the gendered conflict within the institution of marriage as presented in Cary's play are Belsey's The Subject of Tragedy: Identity and Difference in Renaissance Drama (1985), Lewalski's Writing Women in Jacobean England (1993), Quilligan's "Staging Gender: William Shakespeare and Elizabeth Cary" (1993), and Callaghan's "Re-reading Elizabeth Cary's The Tragedy of Mariam" (1994).

This is the preferred analysis of this essay as well, one that considers what Miller, following Pollock, defined as the "domestic politics" (1997: 353) of the play, or the "competing structures of familial authority" (1997: 353). For it is that competition for power within the household that allows for the identification between Cary's public textual performance as an author of closet drama and the political rebellion enacted by her protagonists' silences and speeches. As Miller carefully explains, The Tragedy of Mariam is a cultural product of the upper middle class domestic structures that reared women in a double perspective: training them for the managing of estates and decision-making in household-related business, while simultaneously mandating chastity, silence and obedience (1997: 353). This paradox or duplicity in female instruction (subjection and independence) created an obvious conflict when a woman's independent thought and actions came into struggle with her husband's authority. As Pollock explained, the solution came by ensuring that women would "revert to secondary status whenever it was enjoined and hence not threaten the ruling supremacy of their husbands" (qt. in Miller 1997: 253). Bearing in mind these domestic politics, then, one may situate Cary's voice as an author (the author of the first original play written by a woman published in England) in the context of her circumstances as a member of a propertied class whose upbringing allowed her to "claim [an] independent speaking position" (Miller 1997: 354). Whether her textual performance in the writing and publishing of the play threatens or not the ground of male supremacy is, however, a matter for further discussion. Indeed, as Clarke eloquently argues, closet drama is a genre positioned in the "intersection between publication on the one hand, and private performance on the other" (Clarke 1998: 179).
And, as it pertains to the argument of this essay, the particularities of closet drama—and specifically, of its oxymoronic reality as a "private performance"—are in fact a matter of consequence when it comes to elucidating the play's take on female duplicity and public self-construction as a way for political transformation. Miranda Garno writes that "closet drama was a particularly powerful vehicle for female expression because it had a reputation for falling safely within proper household boundaries" (2012: 365). Yet, as she elaborates, such a private space was not completely privatized, because "if feminine spaces were fully privatized, they would function solely under feminine governance […] Thus the closet, as a domestic enclosure with limited admission, also needed to provide a site of performative exposure" (2012: 366). The closet was thus not a private space at all, but a place of performance, a space for feminine education where women were supposed to act their gender according to the instructions provided in the conduct manuals, that is, the voice of masculine authority that supervised the feminine private space through assigning or forbidding texts for women (Garno 2012: 366). This of course entailed a specific kind of deliberate female performance that is crucial to understanding The Tragedy of Mariam. Closet drama, as a form of textual performance, certifies that, even within the private boundaries of the domestic closet, a woman "must perform overtly for those watching and interpreting her" (2012: 366). And still, the text itself, the act of creating and performing Cary's play, simultaneously defies that notion.

This article aims to discuss the social and political transgressions contained in the duplicitous and fluid feminine performances in Cary's The Tragedy of Mariam, but such textual analysis would be considerably incomplete without addressing the contextual reality of Cary's own textual performance through publication, which mirrors the characters' performative rebellion. As Garno explains, "using closet drama as a vehicle, Cary could obey rules of silence and privacy by writing a didactic text intended for household reading. At the same time, she could transgress expectations of privacy and silence by producing a published document that entered into the homes of others and borrowed readers' voices" (2012: 370). After all, The Tragedy of Mariam has had a "steady readership" since the seventeenth century (Straznicky 2004: 48) and even though the form of its publication—"the semi-anonymous authorship, the retracted personal dedication, the seemingly antitheatrical formalism" (2004: 48)—may suggest a strict form of control over a woman's public speech, it seems however that the play's marketing design was actually aimed to situate it "in relation to an elite literary discourse" (2004: 48). In Quilligan's words, "for Cary, the loosing of female breath in an imagined public spectacle was simultaneously authorial freedom and sexual shame. But that act of transgression was a play, the first to be published in English by a woman" (1993: 230). Mariam and Salome, protagonist and antagonist in Cary's play respectively, do not author a public text from within the intimacy of the male-guarded closet to disobey the feminine mandate of privacy, but they do loosen their breath. As Belsey argued,

In the family as in the state women had no single, unified, fixed position from which to speak.
Possessed of immortal souls and of eminently visible bodies, parents and mistresses but also wives, they were only inconsistently identified as subjects in the discourses about them which circulated predominantly among men. In consequence, during the sixteenth century and much of the seventeenth the speech attributed to women themselves tended to be radically discontinuous, inaudible or scandalous.

Like Cary, Mariam and Salome transgress their socially-assigned feminine space by speaking publicly, but by doing so they also construct themselves theatrically. By executing their own liberating instance of textual performance, they manage to build a truly autonomous female selfhood. By performing their dramatic self through words, they are also allowed to shape a public self in-flux, to move beyond their assigned personal categories of wives and subjects; and it is that change in identity, that duplicity and malleability of the female public self as enacted in Cary's play that sets the ground for the social transformation advanced in the text and explored in this study.

Performing Silence: Mariam's Political Disruption from Within

Mariam's personal metamorphosis throughout the play is best summarized in Nandra Perry's words, when she notes "her successful transformation from a wilful (talkative) anti-heroine into a Stoical (silent) heroine" (2008: 126). This defines her as a "passive hero," someone "free to offer a radical critique of the status quo, but not to actively disrupt it" (2008: 126). This is certainly true when analysed, as Perry does, in coalescence with the imagery of Catholic martyrdom that surrounds Mariam's character, as it posits a form of resistance in which "exemplarity is contingent upon respect for natural hierarchies and an ability to work within rather than against the established social order" (2008: 126). As will be later explored, this is the point of divergence that separates Mariam's and Salome's forms of rebellion, but it is remarkable that, seemingly without breaking audience expectations of imposed female silence, Mariam's movement from loquacity to muteness in fact enacts a form of both domestic and political disobedience, since, as Lewalski notes, in Cary's play "political and domestic tyranny are fused in Herod the Great" (1993: 194), Mariam's husband and usurper king. In this context, Mariam's eventual refusal to say what her king and husband demands to hear enacts a form of rebellion that restores rather than disrupts the political status quo, precisely because Mariam's arrogation of authority through marital disobedience sets back in order the political disorder which was the result of Herod's illegitimate usurpation.

Margaret Ferguson notes that the play begins "by emphasizing the ruler's illegitimacy" (2003: 266), and thus the illegitimacy of the authority that subdues Mariam as subject. King Herod is an Idumean (Cary 2012: 1), that is, an Edomite or converted Jew, who has "crept by the favour of the Romans into the Jewish monarchy" (Cary "Argument" 2012: 2-3). Metonymically identified with a snake, Herod is presented as illegitimately appointed to power thanks to his servitude to the colonizing force. He is immediately accused of having murdered Mariam's father—"the rightful king and priest" (2012: 4)—her brother and her grandfather, all to protect his very flimsy claim to the throne, which he sustains exclusively through his right as Mariam's husband.
Indeed, as Ferguson has noted, the very wording of "The Argument" underlines the fact that Herod has appropriated a title which rightfully belongs to his wife (2003: 266): "This Mariam had a brother called Aristobolus, and, next him and Hyrcanus, his grandfather, Herod in his wife's right, had the best title" (Cary "The Argument" 2012: 8-10, my italics). The claim to the throne thus belongs to Mariam, the rightful heir of the royal bloodline of Simon Maccabee and yet, it is the marital relationship between Mariam and Herod that enthrones him as rightful king and demotes her to subject. Within the power dynamics of "absolutist marriage" (Belsey 1985: 174), the legitimacy of patriarchal domination trumps the illegitimacy of Herod's client kingship. As Clarke argues, both early modern political theory and conduct literature focused on the notion that the family formed a primary unit that sustained broader social and political structures, since "the politicized discourse used in conduct books posits the domestic sphere as a miniature commonwealth where (male) authority is exercised and (female) obedience is enforced" (1998: 180). This establishes a gender hierarchy visible in Cary's play, where an absolutist political authority depends for its legitimation upon domestic patriarchal control. Herod's political power over the colonised territory of Judea is illegitimate in terms of his blood, his religion, his traitorous crimes and even his complicity with the colonizing force; yet, his patriarchal authority over his wife seems uncontested, which gives him the right to Mariam's rightful—in terms of blood, religion and character—claim to Judea's governance. As León Alfar explains, "in The Tragedy of Mariam, Herod's rule is based on usurpation, expulsion of a first wife, and violent control of a second wife. In this play, marriage, law and monarchy are institutions that confer power only on men, and wives are provided for at the whim of the husband, who inherits his authority from the monarchical line" (2008: 86). Yet the play also presents a defiant response to that unequal distribution of power. Herod's authority depends exclusively and excessively on patriarchal domination; thus, any disruption of his marital control can (and does) effectively destabilize the feeble political establishment. Such is the effect achieved by Mariam's transgression, which, while respecting the natural order of things, alters the political structures by restoring legitimacy through the public performance of marital disobedience.

Ramona Wray notes that the play opens by placing "spectacular emphasis upon a speaking female sovereign and direct attention to the theme of the woman's voice" (2012: 31).3 Indeed, the play opens with Mariam's soliloquy: "How oft have I with public voice run on / To censure Rome's last hero for deceit / Because he wept when Pompey's life was gone, / Yet when he lived, he thought his name too great?" (Cary 2012: I.I 1-4). Wray explains these lines: "Mariam describes herself as someone used to speaking often ('How oft'), at length ('run on'), in public ('with public voice'), critically ('To censure') and on no less a subject than the politics of empire" (2012: 31). Indeed, Mariam is directly addressing Caesar and admitting to having censured him in public for being a hypocrite when he cried after Pompey's death. But she continues: "But now I do recant, and, Roman lord, / Excuse too rash a judgement in a woman.
/ My sex pleads pardon; pardon then afford; / Mistaking is with us but too too common" (Cary 2012: I.I 5-8). As Wray contends, Mariam is vilifying herself for her censuring and defiant public speech, and she is doing so in a way that drives the audience's interest towards her gender (2012: 31-32). She asks for forgiveness for a transgression that she attributes to a common feminine mistake but, significantly, she does not choose to remain quiet after uttering these words: they are in fact the first eight lines in a soliloquy of seventy-eight, a public, performative monologue that epitomizes what is clearly perceived by others as Mariam's character flaw, that is, her public loquacity. The dissonance between what Mariam says and what she does could eloquently demonstrate that, as some critics have argued, the play is "distressingly (or at its cultural moment necessarily) contradictory, especially in regard to the issue of women's silence" (Lewalski 1993: 179), but it certainly expresses through dramatic performance a certain duplicity in Mariam's character that allows her to remain within the moral boundaries associated with female silence while transgressing that particular code of behaviour. Meaningfully, the King's sister Salome, her adversary, accuses her tongue of being "so quickly moved" (Cary 2012: I.III 21), and Sohemus, a counsellor of Herod that betrays him out of loyalty and admiration for Mariam, recognizes that "Unbridled speech is Mariam's worst disgrace / And will endanger her without desert" (2012: III.III 65-66). Mariam's original transgression is thus publicly acknowledged by both friends and foes, as it clearly contravenes a social prescription, the consequence of which is the realization of female subjectivity. As Belsey argues, when Mariam speaks, she expresses meaning "located in a consciousness united with the utterance which is its outward expression" (1985: 172). Mariam shapes her own subjectivity by trespassing the social mandate of feminine silence. This mandate is clearly expressed by the Chorus, which conveys the dominant ideology constricting female freedom, but also, as seen, the beliefs of Mariam herself, who also abides by that prescription. Mariam's transgression is thus inextricable from a form of duplicitous behaviour that allows her to challenge the status quo from within. Still, the mandate of the Chorus is worth commenting on:

That wife her hand against her fame doth rear / That more than to her lord alone will give / A private word to any second ear. (…) Then she usurps upon another's right / That seeks to be by public language graced; / And, though her thoughts reflect with purest light, / Her mind, if not peculiar, is not chaste. / For in a wife it is no worse to find / A common body than a common mind. / And every mind, though free from thought of ill, / That out of glory seeks a worth to show, / When any's ears but one therewith they fill, / Doth in a sort her pureness overthrow. (2012: III Chorus 13-16, 19-34)

The Chorus of course gives voice to the ethical framework that embeds Mariam's tragedy but, as Garno has noted, the ideology contained in these words is cognate with that expressed by early modern conduct literature: "The Chorus and its conduct literature cohort fear the disjunction between women's interior and exterior states. According to both, women should use bodily behaviour to externalize their chastity; they should perform silence and obedience before a viewing masculine audience" (2012: 365-366).
What the Chorus thus expresses is not so much a condemnation of women's public speech, but a fear of what lies beneath: women's autonomous performance, a form of duplicity, of the hypocrisy that Mariam censured in Caesar and that she recognizes in her own mixed feelings, first when she believes that Herod has died—"Now do I find, by self-experience taught, / One object yields both grief and joy" (Cary 2012: I.I 9-10)—and later after she learns that he remains alive: "And must I to my prison turn again? / Oh, now I see I was an hypocrite: / I did this morning for his death complain, / And yet do mourn because he lives ere night" (Cary 2012: III.III 33-36). This moment, in the third scene of Act Three, is the moment that signals Mariam's transformation as it marks her transition into a representation of silent integrity as a constituent of her self. Mariam's second and definitive transgression—performed through silence—subverts expectations of female silence and what it entails precisely because she recognizes the hypocritical shape of her public speech. Mariam recognizes that she has pretended before and she recognizes (and confesses) the power in such performance: "I know I could enchain him with a smile / And lead him captive with a gentle word" (Cary 2012: III.III 45-46). Now she refuses to continue that particular form of self-presentation, even while acknowledging that she could upend the gender hierarchy so that her husband would become her prisoner. But she declares: "I scorn my look should ever man beguile / Or other speech than meaning to afford" (2012: III.III 47-48). When she faces Herod after his return, he asks her to smile in happiness that he is back, and she replies: "I cannot frame disguise, nor ever taught / My face a look dissenting from my thought" (Cary 2012: IV.III 59-60). The audience knows for a fact this is untrue. She can indeed frame disguise, as she recognizes she has done before, but she now refuses to do so. As Quilligan writes, "Mariam can no more play her part" (1993: 226). Apparently, she is now complying with the feminine command of silent integrity, but her silence has become a direct action of disobedience, a manifestation of rebellion against her husband's authority, a refusal to "give her mind to a tyrant" (Belsey 1985: 173), and thus a subversion of the social (dis)order sustained in that tyrannical authority. As Bennett explains, and this is key to understanding the autonomy and the changing nature of Mariam's selfhood,

Mariam's ultimate resistance to oppression and tyranny is depicted as a deliberate refusal to (re)engage in dissimulation once she hears of her husband's survival and return. In order to maintain her articulation of herself as an autonomous subject, she must resist all temptations to reformulate her means of agency. Personal integrity is thus not necessarily a natural state, but a careful self-construction, a resistance to the expediency called for in both marital and political realms. (2000: 301-302)

Mariam is still playing her part, but her part has changed. Mariam's new performance of herself, defined by a deliberate and carefully constructed silence, triggers Herod's jealousy. Suspicions of dishonesty and disloyalty immediately undercut his legitimacy, fully dependent on, as argued, his patriarchal authority over Mariam. As his power falls apart, Herod falls prey to Salome's plot against Mariam and believes that she has tried to poison him. When he accuses her, Mariam's only reply is: "Is this a dream?" (Cary 2012: IV.IV 27).
When he reproaches her that she has tried to kill him because she loves his counsellor Sohemus, she answers: "They can tell / That say I loved him. Mariam says not so" (2012: IV.IV 35-36). Her answer is a refusal to answer; she says what she does not say. Once again, Mariam offers a deliberate performance of a textually-constructed silence that ultimately becomes an act of disobedience and rebellion in the face of death. This, an eloquent expression of the "martyrological imagery" (Perry 2008: 126) that adorns Mariam's tragic demise, suggests that it is her silence and not her original bold speech that is validated in the end (Perry 2008: 126), but, as argued, her performance of silence and integrity constitutes a "refusal to conform in speech or appearance (…) [that] undoes the dynamic of power that stabilizes [Herod's] position" (Clarke 1998: 192). The consequence is therefore the undermining of the political system, even if that transformation of the social status quo constitutes a return to legitimacy and thus to the natural order of things. In this sense, Mariam's parallel disobedience of patriarchal mandates may be read as opposite to Salome's wilful subversion of those same patriarchal hierarchies.

Freed from Patriarchy: Salome's Subversion of the Order of Things

Following Jeanne Roberts, Boyd Berry claims that it is useful to consider that the first two acts of the play, when Herod is absent and presumed dead, "imagine or wish for a utopian absence of patriarchy" (1995: 259), since the beginning of the play "presents women acting as if freed from patriarchy" (1995: 259). In particular, Salome's actions in this imagined world without patriarchy are vividly subversive. She expresses her desire to get divorced from her husband Constabarus because she wishes to marry her new lover, Silleus, the king of Arabia. Her desire entails breaking with the Mosaic law, which grants only men the right to repudiate their wives and divorce them at whim. Taking thus advantage of the legal indeterminacy caused by Herod's supposed death, Salome very openly confronts Constabarus with her demands: "Thy love and admonitions I defy! / Thou shalt no hour longer call me wife; / Thy jealousy procures my hate so deep / That I from thee do mean to free my life / By a divorcing bill before I sleep" (Cary 2012: I.VI 41-46). Constabarus's reply, a reaction "both to Salome's vociferousness and to her sexual aggression, the reverse of traditional feminine silence and chastity" (Beilin 2014: 168), could not be more eloquent:

Are Hebrew women now transformed to men? / Why do you not as well our battles fight / And wear our armour? Suffer this, and then / Let all the world be topsy-turned quite! / Let fishes graze, beasts swim, and birds descend; / Let fire burn downwards whilst the earth aspires; / Let winter's heat and summer's cold offend; / Let thistles grow on vines and grapes on briers! (Cary 2012: I.VI 47-54)

As León Alfar notes, Constabarus's words "reaffirm[] as natural a sex-gender system that putatively guarantees systems of power and inheritance" (2008: 62). In opposition to this, "Salome's seizure of male prerogative, accompanied by so cynical a view of law, shakes the proper order of things" (Beilin 2014: 167). Indeed, as far as Constabarus is concerned, Salome's arrogation of masculine power entails a transformation of the self in terms of gender identity: she has now become a man, and that metamorphosis has the effect of turning upside down the entirety of the natural world.
In Constabarus's words, the rhetoric that naturalizes patriarchal hierarchies—and thus the political establishment in the play, which is sustained, as argued, solely on patriarchal domination—is quite transparent, but so is Salome's undermining of such dominant discourse when she replies: "I mean not to be led by precedent. / My will shall be to me instead of law" (Cary 2012: I.VI 79-80). Bennett has defined this moment as "an overt renunciation of legitimate authority and the status quo through a wholehearted embrace of the selfish chaos of will" (2000: 303). The result is now a disruption of social and political order towards disorder, a form of subversion not from within but against the status quo. For Salome's "militant feminism" (Belsey 1985: 174), her wilful arrogation of masculine power, delegitimizes the naturalized sex-gender system, that is, the proper, "divinely-arranged" (Beilin 2014: 168) order of things; and yet, at the same time, her rebellion functions in parallel to Mariam's, as it thwarts, by effectively dismantling the patriarchy, Herod's absolutist marital dominion and thus his political legitimacy as the master of Mariam's right to the throne. In consequence, both political legitimacy and patriarchal authority are revealed as social constructions that can be undone, for, as Raber notes, "Salome's speech confutes theories based on 'natural' order or natural categories, pointing out that all relationships are constructed, and thus manipulable by the individual" (1995: 336). Even more revolutionarily, perhaps, Salome realizes and celebrates the transformative power of her actions in terms of social progress. As she declares, "Though I be first that to this course do bend, / I shall not be the last, full well I know" (Cary 2012: I.VI 61-62). Salome initiates then a sort of sexual revolution that seems to lead society towards disorder. In this process of social revolution, her "selfish chaos of will" (Bennett 2000: 303) is fundamental, for it prevents Salome's speech and actions from being in any way sanctioned, which contributes to the construal of her rebellion as a wilful destabilization of social order. In general terms, as Clarke argues, divorce frequently "represents the breakdown of a social and political order grounded upon the right ordering of the hierarchical relationships between man and woman, governor and subject" (1998: 183),4 but in the case of Salome in The Tragedy of Mariam, the wish for divorce emerges from what Clarke defines as her "transgressive sexual desire and political ambition" (1998: 183). These are, of course, inadmissible grounds for the dissolution of marriage both under Judaic law and within the seventeenth century context in which the play is written, and thus critics such as Clarke have seen in Salome's unjustifiable resistance a foil that highlights the complexities (and duplicities) of Mariam's rejection of Herod, since Mariam is after all refusing to obey an illegitimate monarch and adulterer for reasons of conscience, without renouncing her chastity and virtue (Clarke 1998: 184).
While this study fundamentally concurs, noting the differences in Mariam's disobedience towards order and Salome's revolution of disorder, it would also like to suggest the possibility of upending the argument, as it may be reasonable to argue that Mariam's disobedience of conscience, in contrast with Salome's disobedience of will, desire and ambition, actually emphasizes the power for disruption of the latter, by, among other things, underlining the political weight of chastity as a measure of female value. While Salome's wish to divorce is propelled by sexual desire, Mariam's chastity is established from the very beginning. She recognizes that Herod, "by barring me from liberty / to shun my ranging, taught me first to range" (Cary 2012: I.I. 25-26), but the lesson was ineffectual: "too chaste a scholar was my heart / To learn to love another than my love" (2012: I.I. 26-27). The statement holds value morally and politically, because securing Herod's legitimacy and the perpetuation of his bloodline depends solely on Mariam's guaranteed chastity. As Clarke notes, "female adultery (or the suspicion of adultery) automatically impugns paternity" (1998: 186); but, given the dependence of Herod's flimsy political legitimacy on his patriarchal authority, suspicions of sexual disloyalty also impugn the public system of hierarchies in the Kingdom of Judea. It is Salome's false accusation that "Mariam hopes to have another King" (2012: I.III. 3) that eventually gets Mariam killed—an accusation that eloquently combines the sexual desire and political ambition that colour Salome's own transgression "in order to propose Mariam as doubly transgressive" (Clarke 1998: 187). Indeed, her infidelity would make Herod's political authority tumble along with his patriarchal control. Both transgressions are inextricable. When at the end of the play Herod can no longer control Mariam's performance of silence and speech, there is only one way to execute the authority that sustains him as king, and that is to have her killed. Mariam, as already discussed, reacts by carefully constructing a deliberate performance of silent integrity but, paradoxically, that performance of integrity—which so clearly resembles Catholic martyrologies (Perry 2008: 126)—closes the circle that conjoins Mariam and Salome's shapes of resistance, because in the end Mariam's disobedience is also sexual, even if chaste. As she says to her good friend Sohemus: "I will not to his love be reconciled, / With solemn vows I have forsworn his bed" (Cary 2012: III.III. 15-16). Mariam replaces the wedding vows that bind her to her husband and that sustain the political structure of the realm with private vows she makes to herself to unbind her body from Herod's bed. Her sexual disobedience is not adultery but refusal; it preserves her autonomy and sabotages the king's authority. It is a form of sexual rebellion that upholds her chastity. Mariam is once again contained within the ethical framework that her actions critique, but simultaneously she executes a form of sexual control over Herod that disempowers him and which places her not so much in contrast with Salome's transgressive sexuality, but in line with it.5 As Clarke argues, "Mariam's repudiation of sexual relations amounts to a nullification of the bonds and obligations of marriage and a dissolution of the union that subordinates female identity to male" (1998: 189).
It is in this dissolution of the heteronomous union of man and woman in marriage that Mariam's rebellion mirrors Salome's rejection of female subordination within the proper order of things. Politically, Mariam's disobedience seeks to restore legitimate authority, but her undermining of patriarchal authority has a broader reach, because, like Salome's arrogation of male prerogative, it has the effect of disordering the naturalized world order that legitimizes patriarchal structures. By reclaiming sexual power over her husband, Mariam is breaking custom and seizing a man's right, thus contributing to, in Constabarus's words, "let[ting] the world be topsy-turned" (2012: I.VI. 50).

Conclusions

Wray argues that "while the play tempts us to regard Mariam and Salome as ideologically at odds (…) it simultaneously steers us towards recognizing an experiential common ground: neither is content with the status quo and neither plays a traditional wifely role" (2012: 38). Yet, as argued, the connection between their actions and refusals, between their vociferousness and silence—the contradictory shapes of their performance—goes beyond the unconventionality of their roles as wives. It goes beyond the debasement of Salome's lustfulness and dishonesty as a foil to Mariam's chastity and integrity. Even more so, while Mariam's virtuous rebellion for the sake of order and legitimacy could be said to be validated by her "transfiguration" (Beilin 2014: 171) into a Christ-like figure that transcends the limits of earthly authority, she must pay for her transgression with the effacement of her life, body and words in the last act of the play. It seems hardly an uncontested triumph if the woman ends up murdered, especially when drawing attention to the complementary nature of Mariam and Salome's forms of sexual and political disobedience. In the end, Salome does not get divorced, but she lies, cheats and conspires to get Constabarus killed and remains unpunished, free to marry her new lover and satisfy both her sexual and political desires in a way that, in effect, as Wray explains, "confounds normative generic and moral expectations" (2012: 52). That would be the main conclusion of this study: how the transgressive identities of Mariam and Salome—as identities that can transform, change and go beyond the limits imposed upon them by convention and authority—confound the boundaries of gender, morality and hierarchy. Mariam and Salome's public actions and speeches perform two shapes of female selfhood in-flux, which eventually disrupt the established parameters of gender normativity. This gender normativity and its associated relationships of power and domination, in charge of sustaining social and political hierarchies through a fixed categorization of identities and gender roles, becomes abnormal as the unstable identities of women as wives and subjects are disrupted, effectively undermining the social and political structures represented in the play—arguably, the social and political structures that articulate the institutions of absolutist monarchy, empire and the patriarchal family. Salome's adultery and ambition present sexual desire and political determination as viable options for female subjects; Mariam's refusal first to be quiet, later to speak, and finally to comply and obey both sexually and politically offers the possibility of female disobedience shaped within a performance of integrity. In both cases, a transformative and transgressive presentation of the self becomes the vehicle for social change.
As Bennett writes, the circumstances, actions and speeches of Mariam and Salome "demonstrate the distinctly performative nature of gender roles in early modern England. In making such a vivid distinction between Mariam's or Salome's inner convictions or desires and their outer conduct, Cary reveals the ways in which women could fabricate public characters and adapt those personae to their environments" (Bennett 2000: 306). It is precisely the changing, performative nature of female public personae that makes them subversive. Herod accuses Mariam of being "a painted devil" and "white enchantress" where "hell itself lies hid / Beneath [her] heavenly show" (Cary 2012: IV.IV 17-18, 45-46); and Constabarus tells Silleus that Salome "is a painted sepulchre / That is both fair and vilely foul at once" (2012: II.IV 41-42). It is therefore female transgressive performance, the refusal to comply and commit to one single category, to act their gender and place, which presents in the dramatic context of Cary's play two metamorphic female identities that manage to crack the foundations of patriarchal order. Mariam and Salome, and their duplicitous and disobedient identities, reveal how early modern women could subvert masculine law and order, as their performance, public speech and self-representation undermine the delicate power structures that sustain and legitimize the patriarchy and its political institutions, thus offering an alternative for social and political transformation.
2020-10-30T09:03:32.217Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "4c805133b286b538e1df9ec185c2e919414cd4ad", "oa_license": "CCBY", "oa_url": "http://njes-journal.com/articles/10.35360/njes.520/galley/489/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4b32c2da05897fb5572e5cb050045f2e5e1db0b0", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Sociology" ] }
249330685
pes2o/s2orc
v3-fos-license
Traditional Fermented Foods and Beverages from around the World and Their Health Benefits

Traditional fermented foods and beverages play an important role in a range of human diets, and several experimental studies have shown their potential positive effects on human health. Studies from different continents have revealed strong associations between the microorganisms present in certain fermented foods (e.g., agave fructans, kefir, yeasts, kombucha, chungkookjang, cheeses and vegetables, among others) and weight maintenance, reductions in the risk of cardiovascular disease, antidiabetic and constipation benefits, improvement of glucose and lipid levels, stimulation of the immunological system, anticarcinogenic effects and, most importantly, reduced mortality. Accordingly, the aim of this review is to corroborate information reported in experimental studies that comprised interventions involving the consumption of traditional fermented foods or beverages and their association with human health. This work focuses on studies that used fermented food from 2014 to the present. In conclusion, traditional fermented foods or beverages could be important in the promotion of human health. Further studies are needed to understand the mechanisms involved in inflammatory, immune, chronic and gastrointestinal diseases and the roles of fermented traditional foods and beverages in terms of preventing or managing those diseases.

Introduction

Traditional fermented foods and beverages (TFFB) occupy an important place in human diets. The earliest evidence of the use of fermented foods and beverages comes from Asia from around 8000 B.C. in the form of vessels found in archeological areas [1]. Nowadays, fermented foods and beverages are defined as "foods or beverages produced through controlled microbial growth with conversion of food components through enzymatic action" [2]. Different fermented foods are consumed around the world. It has been reported that between 5% and 40% of all food consumed by humans belongs to this group [3]. Importantly, the positive effects of fermented foods on health have made them essential in human diets. Fermented foods and beverages work in the human body through the presence of functional microorganisms and their ability to transform the chemical elements of raw materials.

[Figure 1 legend (probiotic mechanisms in the gut; items (1) and (2) not recovered): (3) Enterocytes inhibit the production of the pro-inflammatory cytokines IL-6*, TNF-α* and IFN-γ*, and stimulate the production of TGFB and IL-18* for lymphocyte recruitment, which maintains the immune balance. (4) With the help of Paneth cells, they also produce immunoglobulin A (IgA). (5) Paneth cells produce antimicrobial substances (alpha defensin and lysozymes). (6) They possess antioxidant properties, promoting the synthesis of protection mechanisms against reactive oxygen species (ROS), such as glutathione S-transferase (GSTs), NAD(P)H:quinone reductase (NQO1) and glutamate-cysteine ligase gamma (gGCL), among others. (7) Goblet cells produce intestinal mucus. (8) L cells stimulate the synthesis of GLP-1. (9) Macrophages and dendritic cells use butyrate, stimulating the production of IL-10 and retinoic acid that also recruit lymphocytes and participate in the homeostasis of the immune system. (10) Probiotics synthesize vitamins (K, B5, B8, B9 and B12) and other substances such as lactic acid and hydrogen peroxide that act as antimicrobials. (11) They can compete and fight against pathogens. *IL-6: Interleukin-6; TNF-α: Tumor Necrosis Factor alpha; IFN-γ: Interferon gamma; TGFB: Transforming growth factor beta; IL-18: Interleukin-18.]

Prebiotics were first described by Gibson and Roberfroid in 1995 as a "non-digestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon, and thus improves host health" [27]. Recently, prebiotics have been defined as non-digestible and non-hydrolysable carbohydrates such as galacto-oligosaccharides, fructo-oligosaccharides, soybean oligosaccharides, inulin, cyclodextrins, gluco-oligosaccharides, xylo-oligosaccharides, lactulose, lacto-sucrose, isomalto-oligosaccharides, fructans and arabinoxylan; their functions are demonstrated in Figure 2. Prebiotics help to reduce constipation, foster weight gain or loss, improve glucose and lipid control, stimulate the immune system and increase the absorbability of calcium. Additionally, it has been reported that prebiotics have an anti-carcinogenic effect [19]. Some of the favorable effects of prebiotics, when used by the microorganisms colonizing the host, are their ability to generate metabolites, such as short-chain fatty acids (SCFA), i.e., carbon sources in the colon which play diverse biological roles [28]. The components of prebiotics, i.e., polyunsaturated fatty acids (PUFAs), may influence diverse aspects of immunity and metabolism [29].

[Figure 2 legend (prebiotic mechanisms in the gut): (1) The regulation of intestinal transit by water retention, which improves stool consistency and makes movements more fluid (1a), thereby stimulating peristalsis. (2) In adequate amounts, a feeling of satiety is given. They inhibit the absorption of simple carbohydrates and reduce blood glucose. (3) With a good source of energy, the gut microbiota achieves a proper function. (4) The production of SCFA (4a) has an impact on the intestinal pH (which, under optimal conditions, is slightly acidic), leading to the inhibition of the proliferation of pathogens. (5) SCFAs are a source of energy for enterocytes and colonocytes, (5a) improving the immune system. (6) They stimulate the growth and reproduction of beneficial gut microbiota, (6a) inhibiting the colonization of pathogenic bacteria. (7) The proper function of the gut microbiota induces hypocholesterolemia.]

Therefore, the aim of this review is to analyze existing information from experimental studies that comprised interventions using various TFFB and the association thereof with human health.

Microorganisms Found in Traditional Fermented Foods and Beverages

It is important to note that microorganisms are responsible for the characteristics of fermented foods and beverages. In other words, microorganisms delimit acidity, flavor and texture. Nowadays, the most important roles of TFFB are their health benefits that go beyond simple nutrition [41]. Table 1 presents an overview of various TFFB, including a description, their identified microorganisms and their region of origin.

Africa

Africa is one of the continents that depends most on fermented food and other conservation methods to achieve a satisfactory diet. "Ogi", "iru" and "gari" are fermented Nigerian foods that have been widely commercialized. However, many others are made at the household level. Ogi is a fermented product of corn, sorghum or millet grains. Iru is a fermented product of African carob, and gari is a fermented cassava product derived from peeled fresh roots grated to a puree that is placed inside bags for fermentation.
The main microorganisms in these products are lactic acid bacteria (LAB) [43] and yeasts [43,44,72,79]. "Borde" and "Shamita" are important traditional fermented Ethiopian drinks, produced by the overnight fermentation of certain cereals by LAB [43,80]. "Togwa" is a fermented beverage from Tanzania. It can be prepared based on cassava, corn, sorghum and millet, or combinations of these. Yeasts and lactic acid bacteria are the predominant microorganisms found in Togwa [43,81]. "Amasi" or sour milk, "umqombothi" or sorghum beer, and "amahewu", a non-alcoholic fermented cornmeal drink, are South African foods and drinks. "Amasi" is produced by using specific Lactobacillus strains such as Lactobacillus delbrueckii subsp. lactis and Streptococcus spp. "Amahewu" is made with an initial culture of LAB, while "umqombothi" is made from corn or sorghum by fermentation with wild yeast and LAB from malted sorghum adjuncts [79,80,82]. Sour porridge is a corn or sorghum food which is fermented using mainly LAB to improve its palatability, flavor and nutritional value. "Chibuku" is a traditional Zimbabwean sorghum beer commercially produced by fermentation with sorghum yeast, while "mabisi", "munkoyo" and "chibwantu" are traditional Zambian foods and beverages produced through fermentation with yeast and lactic acid bacteria [79].

America

A large number and variety of TFFB are produced in the Americas, many of which come from pre-Hispanic times. Therefore, they are deeply rooted in the customs of most Latin Americans. An example of this is the "atole agrio", a drink consumed in Central America. Atole agrio is a fermented beverage of corn in water seasoned with aromatic spices and other flavorings (chocolate, juice, or sweet fruit pulp). It is also the base for other pre-Hispanic drinks and forms one of the most typical breakfasts in Latin America [56,64]. "Chicha" is consumed throughout Latin America, most frequently in northern Argentina. It is obtained from the fermentation of corn. Another important South American drink is "masato", which is consumed mainly in Colombia, Peru and Venezuela; it is made from cassava, rice, corn or pineapple [56,61,64]. Mexico is one of the countries with the largest number of TFFB; these include "tepache" (pineapple), "pozol, tesguiño and atole agrio" (corn), "pulque" (Agave), and "colonche" (red prickly pear), among others [8,56]. The microorganisms found in these TFFB might be different, even if they come from the same geographical region, due to their artisanal preparation. The microorganisms present in these TFFB are Lactobacillus, Bifidobacterium, Bacillus and yeasts [61].

Asia

In Asia the most common microorganisms found in fermented food are Lactiplantibacillus plantarum, Levilactobacillus brevis, Pediococcus cerevisiae, Acetobacter and Enterobacter. Especially in Korea, China and Nepal, these microorganisms are found in fermented vegetables. In Indonesia and Japan, these microorganisms can be found in fermented soybeans and rice wine, and in China in fermented tea, as described in Table 1 [3].

Europe

European countries have a variety of products containing prebiotics or probiotics, such as fermented olives, milk, cheese, meat and bread, among others. In all of these, Lacticaseibacillus paracasei is the most predominant microorganism.
Nowadays, kefir is one of the most popular products containing microorganisms from the Lactobacillaceae family, such as Lentilactobacillus kefiri, Lacticaseibacillus paracasei, Lentilactobacillus parabuchneri, Lacticaseibacillus casei, Lactobacillus lactis, Lactococcus lactis, Acetobacter lovaniensis, Kluyveromyces lactis and Saccharomyces cerevisiae [83]. As mentioned, TFFB make a major contribution to dietary staples in numerous countries around the world. Fermented foods and beverages contain components that improve their nutritional qualities by modulating specific functions in the body.

Interventions Involving the Use of Traditional Fermented Foods and Beverages and Their Association with Benefits on Human Health

The use of TFFB to promote health benefits has become more widespread. Evidence supports their positive effects on human health [82-86]. The results of recent studies showed that fermented foods and beverages impact the health of consumers due to the presence of prebiotics and probiotics [19,24-26,32,34,36,37]. Several experimental studies, carried out in both humans and animals, have shown that TFFB have beneficial effects in terms of modulating immune response and metabolic function. Additionally, they can improve body fat and mass, as well as blood pressure indices and brain health, providing, for example, stress relief and memory enhancement, as well as reduced risk of anxiety, depression, behavioral dysfunctions and cancer [83,85-88]. The mechanism of action involves decreasing the production of pro-inflammatory cytokines, which contributes to the prevention of these diseases. Also, probiotics and prebiotics have a unique metabolic capacity, i.e., they have the ability to colonize and grow in the gastrointestinal tract, stimulate its activity and modulate the immune system [20,40,79]. New discoveries and advances strongly suggest that gut microbiota play an active role not only in adiposity, but also in glucose and lipid metabolism and the maintenance of the immune system and cardiovascular system, among others [89]. We will now address related studies, sorted by their continent of origin:

Related Studies in America

Due to promising outcomes obtained in experimental studies, potential treatments of chronic diseases using functional food have been proposed. In a study done by Padilla-Camberos et al. [30], it was reported that the ingestion of Agave fructans decreased body mass index (BMI), total body fat and triglyceride levels in adults with obesity (n = 28) who consumed a low-calorie diet (p < 0.001) [30]. Reimer et al. [90] also used inulin-type fructans (ITF) and whey protein in isocaloric snack bars. In their study of adults with overweight and obesity (n = 125), although no significant change in energy intake, body weight or BMI were observed (in contrast to the previous work), they did observe significant improvements in some aspects of appetite control following the addition of ITF and/or whey protein into snack bars (p < 0.02), as well as changes in gut bacterial composition and function [90]. In a study by Márquez-Aguirre et al. [89] in a group of 60 mice with high-fat diet-induced obesity, using fructans from Agave tequilana, it was shown that higher and lower intakes of Agave fructans had complementary effects on metabolic disorders related to obesity (p < 0.05) [89]. In 2015, López-Velázquez et al.
[91] carried out a randomized, double-blinded controlled trial in newborns to assess the prebiotic activity of Agave tequilana (Metlin and Metlos, Mexican products®) for three months. The sample comprised 600 newborns, divided into the following groups: group 1, formula-fed with added probiotics (Lacticaseibacillus rhamnosus) + Metlin + Metlos; group 2, formula-fed with added probiotics + Metlin; group 3, formula-fed with added probiotics + Metlos; group 4, formula-fed with added probiotics; group 5, formula-fed without probiotics or prebiotics. A sixth group was also included as a positive control for breastfeeding. The results demonstrated statistically significant (p < 0.005) beneficial differences, especially in group 1, including a direct impact on the immune response (salivary IgA) and bone metabolism, and lower levels of total cholesterol, triglycerides and lipoproteins [91].

Beltrán-Barrientos et al. [94] carried out a study with fermented milk in prehypertensive subjects. Participants were randomized into two groups (n = 18 each group): one group was treated with milk fermented with Lactococcus lactis NRRL B-50571, while a control group was treated with artificially acidified milk for 5 weeks. The results showed that the systolic (116.5 ± 12.26 mmHg vs. 124.77 ± 11.04 mmHg) and diastolic (80.7 ± 9 vs. 84.5 ± 8.5 mmHg) blood pressure of the group treated with fermented milk was lower than that of the control group. Additionally, other parameters such as triglycerides, total cholesterol and low-density lipoproteins in the blood serum were lower in the group treated with fermented milk compared to the control group [94]. This effect was also described by Rodríguez-Figueroa et al. [95] in rats, where significantly reduced blood pressure was observed after 4 weeks of ingestion of milk fermented with Lactococcus lactis NRRL B-50572 [95].

Related Studies in Asia

TFFB are particularly popular in Asia. On this continent, techniques have been developed for preserving cereals, vegetables, and meat. TFFB provide benefits to human health such as microbial stability, nutritional content and detoxification, among others [68]. Aspergillus, Rhizopus, Mucor, Amylomyces, and Bacillus are the principal microorganisms found in fermented vegetables, milk, cereal products, soybean food, starters and alcoholic beverages, as shown in Table 1 [68,96]. In a study by Rahat-Rozenbloom et al. [96], 25 males and non-pregnant, non-lactating females with a BMI ≥ 20 and ≤ 35 kg/m2 were divided into two groups: 12 participants in the lean group and 13 in the obese group. The participants were studied for 6 h on three separate days after consuming 300 mL water containing 75 g glucose (GLU) as a control or with 24 g inulin (IN) or 28 g resistant starch (RS). In the study, it was found that RS had favorable second-meal effects which were likely related to changes in free fatty acids rather than short-chain fatty acid concentrations (p < 0.001) [96]. Yang et al. [31] also used inulin and Camellia sinensis with a group of men and women (n = 30) and found that the continuous intake of catechin-rich green tea in combination with inulin for at least 3 weeks may be beneficial for weight management (p < 0.05) [31]. Regarding kombucha, experimental evidence suggests different properties such as antioxidant and energizing potencies, and the stimulation of depressed immunity [97]. Nevertheless, it is important to emphasize that this evidence was obtained via animal and in vitro experiments, not in humans [67,72,97,98].
On the other hand, Chungkookjang showed improvements in visceral fat, lean body mass and percentage of body fat in human patients with obesity [73,99]. Kimchi, another traditional Korean fermented food, has been shown to improve insulin resistance, blood pressure, and body fat, among other parameters. In a study performed by Han et al. [69], 24 obese women were randomly assigned to either a fresh or fermented kimchi group (80 g of fresh or fermented kimchi per day (60 g/pkg × 3 meals) for eight weeks). To verify the correlation between the anti-obesity effects of kimchi and changes in gut microbiota, fecal and blood samples were analyzed. Additionally, fecal microbiota were pyro-sequenced and microarray analyses of blood samples were done. In the study, it was found that fresh and fermented kimchi exerted differential effects on obesity-related clinical parameters. Correlations of these effects with changes in blood gene expressions and gut microbial population were more evident in the fermented kimchi group than the fresh kimchi group [69]. In addition, Kim and Park conducted a study with standardized and functional kimchi intake in adults. They observed a significant decrease in levels of LDL-C (low-density lipoprotein cholesterol) (p < 0.05) and increased levels of HDL-C (high-density lipoprotein cholesterol) (p < 0.01). However, fresh kimchi intake was associated with a reduction of total serum cholesterol, triglycerides and IL-6 levels, as well as an increase in adiponectin levels (p < 0.05). In the fecal analysis, the standardized kimchi and functional kimchi groups showed decreased pH, β-glucosidase and β-glucuronidase levels (p < 0.01). Furthermore, intake of kimchi, especially functional kimchi, reduced the abundance of Firmicutes, but increased levels of Bacteroidetes. In addition, intake of both types of kimchi increased the abundance of SCFA production-related genera (Faecalibacterium, Roseburia, and Phascolarctobacterium) and reduced Clostridium sp. and Escherichia coli group counts. Thus, the consumption of kimchi regulates metabolic parameters and colon health [70].

Tempeh, a traditional fermented soybean product, is consumed in Indonesia. It has been shown to have beneficial effects on the immune system. In a study performed in 16 participants by Tjasa Subandi et al. [100], it was reported that tempeh increased secretory immunoglobulin A (IgA) production in the ileum and colon. Tempeh acted as a potential modulator of the composition of gut microbiota, since its consumption increased the population of Akkermansia muciniphila in the human intestinal tract [100]. In another study from Indonesia, a symbiotic fermented milk (skimmed milk, fructooligosaccharides (FOS), Lactiplantibacillus plantarum) fortified with iron and zinc was prepared to assess its effect on infant growth. The sample consisted of 94 children under five years of age with growth retardation, randomly assigned into two groups receiving their respective treatments for 3 months: an intervention group (double-fortified symbiotic milk) and a control group (unfortified milk). It was observed that the Z score of weight/height for age in normal children of both groups increased, although the difference between groups was not statistically significant (p > 0.005) [101].

Related Studies in Africa

Africa is a continent rich in traditions and culture, as well as traditional foods. A wide range of fermented foods are produced across the continent and consumed on a daily basis [79].
Currently, there are not many clinical trials describing the health benefits of traditional African fermented foods. However, some articles have discussed the possible beneficial effects on humans of consuming these foods, based on the microorganisms that they might contain. The effects of the Lactobacillus and Bifidobacterium present in traditional African foods include: reduction of total cholesterol and LDL, increased levels of HDL, decreased risk of dental caries, antimicrobial and antifungal actions, inhibition of the growth of pathogens, modulation of the immune system and mental health, and reduction of mycotoxins in fermented maize products. Mokoena et al. [43] further explained that these bacteria can also serve as drug-delivery vehicles [43]. Potential health benefits could thus be found in traditional fermented foods; however, more research is needed.

Related Studies in Europe

Food and beverage fermentation stands as a remarkable benchmark in the history of human society. Ancient cultures such as those of Egypt, Rome, Greece and Mesopotamia used TFFB as medicine to treat diseases [75]. Baschali et al. [75] described that "Low-Alcoholic Fermented Beverages (LAFB) and Non-Alcoholic Fermented Beverages (NAFB) are treasured as major dietary constituents in numerous European countries". The resulting prolonged shelf-life also contributes to food security [75].

A clinical trial by Amoutzopoulos et al. [102] in Turkey showed that "hardaliye", a traditional fermented drink based on grapes, has antioxidant effects. In the study, 100 adults between 20 and 60 years of age were randomly assigned to the following groups: High Hardaliye (HH), Low Hardaliye (LH) and control. The HH and LH groups were composed of 45 and 35 participants, who consumed a daily dose of 500 mL and 250 mL of hardaliye during the study period, respectively. The twenty subjects in the control group did not receive any hardaliye. In the HH and LH groups, the measurements of conjugated dienes, malondialdehyde, and homocysteine decreased significantly (p = 0.001), and there was a significant difference in homocysteine reduction between the HH and LH groups. Finally, total antioxidant capacity and vitamin C increased in both groups [102].

Dönmez et al. [103] conducted a clinical trial in Turkey with "koumiss", a traditional milk beverage produced from the fermentation of mares' milk. Eighteen sedentary men were assigned to three equal groups: koumiss (K), exercise + koumiss (KE), and exercise alone (E). At the end of the study, triglyceride and cholesterol levels were found to have decreased in all groups, but the decrease was significant on day 15 only for the KE group. HDL cholesterol tended to increase in all groups on day 15, but the increase was significant only in the KE group (p = 0.001) [103].

On the other hand, a study carried out in Italy and Spain evaluated the effect of a partly fermented infant formula using the bacterial strains Bifidobacterium breve C50 and Streptococcus thermophilus 065 with a specific prebiotic mixture (short-chain galacto-oligosaccharides (scGOS) and long-chain fructo-oligosaccharides (lcFOS); 9:1). The formula was given to 200 infants ≤28 days of age; the infants were assigned either to an experimental infant formula containing 30% fermented formula and 0.8 g/100 mL scGOS/lcFOS, or to a non-fermented control infant formula without scGOS/lcFOS, with a breastfed group serving as a reference.
In this study, no relevant differences were found in gastrointestinal symptoms; however, stool consistency was softer in the experimental group than in the control group. Daily weight gain was equivalent for both formula groups (0.5 SD margins), with growth outcomes close to those of breastfed infants. No clinically relevant differences in adverse events were observed, apart from a lower investigator-reported prevalence of infantile colic in the experimental versus the control group (1.1% vs. 8.7%; p < 0.02). In conclusion, the partly fermented formula with prebiotics induced stool consistencies closer to those of breastfed infants [104].

In Slovenia and Croatia, a study was conducted to investigate the influence of a symbiotic fermented milk on the fecal microbiota composition of 30 adults with irritable bowel syndrome (IBS). The symbiotic product contained Lactobacillus acidophilus La-5, Bifidobacterium animalis ssp. lactis BB-12, Streptococcus thermophilus and dietary fiber (90% inulin, 10% oligofructose); a heat-treated fermented milk containing dietary fiber but no probiotic bacteria served as placebo. Stool samples were collected after a 4-week run-in period and at a 1-week follow-up. After 4 weeks of symbiotic (11 subjects) or placebo (19 subjects) consumption, a greater increase in DNA specific for Lactobacillus acidophilus La-5 and Bifidobacterium animalis ssp. lactis was detected in the feces of the symbiotic group compared with the placebo group. At the end of the consumption period, the feces of all subjects assigned to the symbiotic group contained viable bacteria with a BB-12-like RAPD profile, and after one week of follow-up, BB-12-like bacteria remained in the feces of 87.5% of these subjects. Next-generation sequencing of 16S rDNA amplicons revealed that only the percentage of sequences assigned to S. thermophilus was temporarily increased in both groups, whereas the global profile of the fecal microbiota of patients was not altered by the consumption of the symbiotic or the placebo [105].

These studies have shown, through scientific evidence, the positive effects of TFFB on human health. Moreover, the benefits of probiotics and prebiotics are promising for clinical use against noncommunicable diseases (NCDs).

Conclusions

In recent years, the intake of TFFB has revealed benefits to human health and favorable effects on NCDs and on gastrointestinal and immune disorders, suggesting that TFFB could be used to improve human diets. Moreover, gut microbiota composition plays an important role in metabolic disorders. Dysbiosis, an imbalance of microorganisms in the gut microbiota associated with metabolic disorders, can potentially be modulated by probiotics or prebiotics. Several studies have shown the therapeutic effects of prebiotics and probiotics on BMI, waist circumference, accumulation of body fat, and glucose and lipid levels. TFFB are beneficial and can be used as a novel tool in the multicomponent treatment of different chronic noncommunicable diseases. However, the dosage, duration of treatment and short- and long-term effects of the administration of the different microorganisms are still a matter of research. When consumed in adequate amounts, TFFB show health benefits associated with cardiovascular diseases, type 2 diabetes, obesity and neurological problems, among others. In conclusion, further research is needed to gain insights into the mechanisms involved in the treatment of diseases with prebiotics and probiotics.
A better understanding of the relationship between the functional properties of TFFB and their impact on health is needed in order to develop strategies for the management of chronic gastrointestinal diseases, among other conditions.
Documented Skeletal Collections and Their Importance in Forensic Anthropology in the United States

Documented skeletal collections are the backbone of forensic anthropology due to their associated biohistories. This paper describes the identified skeletal collections and their relevance in forensic anthropological research, education and training in the US. The establishment of documented skeletal collections in the US can be distinguished into two modus operandi, depending on the stance towards the dead, legislation, and medical and forensic practices. In the 19th and early 20th centuries, anatomists amassed skeletons from cadaver dissections, shaped by European influences. Those skeletons compose the anatomical collections—such as the Robert J. Terry Anatomical Collection—predominantly representing impoverished and unclaimed individuals. Ethical concerns for the curation and research of African American skeletons without family consent are growing in the US. In contrast, since the 1980s, modern documented skeletal collections have originated from body donations to human taphonomy facilities, such as the William M. Bass Donated Skeletal Collection. Osteological methods essential to establishing identity—such as the estimation of age at death and sex—have been developed and tested with skeletons from documented collections. Therefore, the analysis of identified skeletons has been crucial for the development of forensic anthropology in the US.

Introduction

Documented skeletal collections are tightly related to the development of American forensic and physical anthropology, born out of anatomy and medicine. In recent years, the word "physical" in this terminology has been replaced with the word "biological", conveying a larger spectrum of anthropological research and emphasizing an overarching study of humans as well as their living and fossil relatives. The discipline has created a distance from the merely classificatory, and to some extent hierarchical, approach to human remains that was the focus of physical anthropology, and this distance has a strong resonance in forensic anthropology. In the US, as in many countries, physical anthropology provided the basis for the development of forensic anthropology and the creation of many documented, anatomically oriented collections aiming to explore human variation. In the US, remains of Native Americans and African Americans were extensively represented while forming collections. Michael Little and Kenneth Kennedy's [1] book, Histories of American Physical Anthropology, offers an overview of the entangled history of these disciplines and the relevance of documented collections to their growth. Some names that contributed to the development of US physical anthropology were Samuel Morton (1799-1851), Franz Boas (1858-1942), Aleš Hrdlička (1869-1943), Raymond Pearl (1879-1940), Earnest A. Hooton (1887-1954), T. Wingate Todd (1885-1938), Mildred Trotter (1899-1991), and W. Montague Cobb (1904-1990), amongst many others; for details see [1]. Of these, Samuel Morton is described as "...best known for his collection of 968 human crania of Native American and other populations..." [1] (p. 5). The Samuel G.
Morton Cranial Collection emphasized research on craniology, which was essentially classificatory, with a fixation on racial identification and typology, akin to other European practices of collection creation and use following the ideas of Johann Blumenbach [2,3]. One is happy to acknowledge, though, that anthropology is slowly moving away from such a classificatory approach to human remains, emphasizing human variation instead [4-6]. Alongside Samuel Morton, other names are associated with the creation of referenced collections, such as Robert J. Terry and William Montague Cobb, amongst others discussed further in this chapter; see [1] and the authors therein. An online search for documented collections in the US revealed approximately 26 documented human osteological collections (Table 1). Some collections are better known than others among scholars, such as the Robert J. Terry Anatomical Collection and the William M. Bass Donated Skeletal Collection. Most of these collections were built from body donations, cadaver dissections, medical schools, private collections, and other contexts. Anatomical collections have a special place within the development of US physical anthropology, which will be addressed below. The diversity of provenance of the skeletons explains, to some extent, the composition of the collections known to exist in the present day. Some comprise complete to almost complete skeletons, but others are represented by specific anatomical regions, including pathological skeletal specimens. Most of these collections were not established with a research design in mind, i.e., they result from the accumulation of human remains made available by donation or other practices, contrasting with collections such as the Morton skull collection, which was constructed in an attempt to classify humans based on morphological typologies associated with racial profiling. This paper describes the origin of the documented skeletal collections, their associated ethical issues, and their importance in the development of forensic anthropological research, education, and training in the US.

Anatomical Collections

As already stated, the development of physical anthropology in the US is associated with the establishment of documented skeletal collections and the study of skeletal variation in the 19th and early 20th centuries. These anatomical collections are still employed to this day, including in forensic science. To understand their creation, we have to look into the history of medical education, the related legislation, and the stance towards the dead in the United States. In the late 18th and early 19th centuries, anatomists influenced by the practices of European doctors resorted to grave robbers due to the high demand for cadavers to teach anatomy; this led to public uprisings against the robbing of graves for dissection cadavers [31,32]. In 1831, Massachusetts passed the first Anatomy Act, granting legal access to unclaimed cadavers for dissection to help protect burials against grave robbing [32]. The other states followed suit by implementing their own Anatomy Acts [14]. The Anatomy Acts opened the legal path for the establishment of documented skeletal collections from unclaimed dissected individuals.

The conceptualization behind the origins of documented skeletal collections is correlated with the mentoring relationships among different generations of anatomists and anthropologists [10]. Robert J. Terry started the R. J. Terry Anatomical Collection in 1910, influenced by his mentors George S.
Huntington and Sir William Turner [23,33]. Between 1893 and 1921, at Columbia University, Huntington collected between 7000 and 8000 human skeletons from unclaimed individuals [15]. As Huntington's health declined, bone elements from the collection were traded or gifted to other institutions [15]. Approximately 3070 partial skeletons that remained from the Huntington Collection are now housed at the National Museum of Natural History [14,15]. Huntington believed in a separate analysis per bone for racial and morphological studies [14]. Terry, shaped by Huntington's teaching, collected skeletons from dissections to research normal and pathological variation in the human skeleton at Washington University in St. Louis [23]. After Terry's retirement in 1941, Mildred Trotter expanded the collection to 1728 skeletons, until her own retirement in 1967. Trotter focused on increasing the number of skeletons of white females, which were lacking in the collection because of the scarcity of female cadavers [33]. Currently, the Robert J. Terry Anatomical Collection is curated at the National Museum of Natural History.

Influenced by Thomas W. Todd, William Montague Cobb established the W. Montague Cobb Human Skeletal Collection between 1932 and 1969 at Howard University, Washington DC [14]. Cobb was a PhD student of Todd at Western Reserve University (now Case Western Reserve University), which curated the Hamann-Todd Osteological Collection; this collection is currently housed at the Cleveland Museum of Natural History. Although it was started by Carl A. Hamann in 1893, who collected over 100 skeletons from unclaimed cadavers, its biggest propeller was Todd, between 1912 and 1938 [10,14,34]. Todd expanded the Hamann-Todd Collection to over 3100 human skeletons [14]. T. Wingate Todd's views differed from the mainstream research in morphological and racial variation: Todd believed that race, as a proxy for ancestry, was not the sole determinant of biological variation in the human skeleton, and that environmental and social parameters would likewise affect growth and ageing [14,35,36]. Cobb, after finishing his PhD in 1932, assembled over 970 skeletons from dissections while taking a biocultural approach to the socioeconomic influence on morbidity and mortality [14,37,38]. As the first African American physical anthropologist, Cobb aimed to empower African American scholars on matters of race and human biology research, and to improve health care [14,38,39]. The collection now has fewer individuals than those assembled by Cobb (n = 970). Muller and colleagues state that "Due to improper storage and disuse, the number of individual skeletons is now reduced to approximately 680." [14] (p. 193); however, they offer no further explanation for this fact [14].

A decline in the supply of unclaimed cadavers started in the 1930s and intensified over the following 30 years [40]. The decrease was associated with welfare legislation and an improvement in the quality of life in the US [23,40]. During this period, although prejudice against dissection was prevalent, some people donated their bodies, or those of family members [40]. These reforms affected the anatomical collections, although efforts to amass skeletons were still ongoing, especially in regard to the Terry Anatomical Collection and the Cobb Human Skeletal Collection [14,23].

Modern Documented Skeletal Collections

In 1968, the Uniform Anatomical Gift Act (UAGA) standardized the anatomical laws across the US [40]. With the UAGA, the body gained the status of property, thus allowing individuals to leave their body for science and/or transplants after death in their will [40], opening the legal path for the establishment of the modern documented skeletal collections through body donation programs. The anatomical skeletal collections were formed in a medical context concerning skeletal variation, while modern collections are linked with the study of human decomposition in the forensic sciences. In 1980, the first human decomposition facility was created by William M.
Bass, a forensic anthropologist at the University of Tennessee [29]. The first donation arrived in 1981 [41]. The Forensic Anthropology Center's mission was to lead research in human decomposition, advance forensic anthropology, train and educate, and provide consulting services [7]. Another purpose of the Forensic Anthropology Center was to produce a large collection of modern documented skeletons, the William M. Bass Donated Skeletal Collection [29,41]. In the US, body self-donations for transplants, research and education picked up at the end of the 20th century [40]. However, initially the William M. Bass Donated Skeletal Collection was chiefly composed of unclaimed individuals from medical examiners and state donations [41]. Subsequently, the facility changed its policy and currently only accepts donations made by the donors themselves or by their legal next of kin [7]. The 1994 novel "The Body Farm", by Patricia Cornwell, and popular forensic television shows in the early 2000s were major game-changers in the rise of body donations at the University of Tennessee [7,41]. However, this rise is likewise rooted in a greater acceptance of body donation and dissection among the American population [40], especially European-Americans. The William M. Bass Donated Skeletal Collection has over 1800 skeletons [42], comprising a higher number of older European-American males than females or individuals of other self-reported racial groups [8].

Following the model established at the University of Tennessee, six other forensic anthropology facilities were created at Western Carolina University (2006), Texas State University (2008), Sam Houston State University (2009), Southern Illinois University, Carbondale (2012), Colorado Mesa University (2013), and the University of South Florida (2017) [7-9]. New forensic anthropology research facilities are being planned in the US [43]. According to Vidoli et al. [7] (p. 464), the Anthropology Research Facility of the University of Tennessee "has become a source of pride and recognition in the community". However, that is not always the case, and the proposal to build a new human decomposition facility can meet community opposition [7].

Not all modern collections are associated with a taphonomic facility to study decomposition; examples include the Maxwell Documented Collection from the University of New Mexico and the Boston University Donated Osteological Collection [12]. The Maxwell Documented Collection was created in 1975 by Stanley Rhine, whose donations came from self-donors, legal next of kin, the Department of Anatomy of the University of New Mexico, and the Office of the Medical Investigator [22]. In 2008, 15% of the individuals had no documentation about their donation source [22]. Donations to the Boston University Donated Osteological Collection are used for the education of students and for law enforcement [12].

The Research Value of Documented Skeletal Collections

Throughout the history of forensic anthropology, the development of methods of analysis has been closely related to the establishment and research availability of documented skeletal collections. Our early pioneers recognized that such collections were vitally needed to place the emerging science of forensic anthropology on a solid foundation and to document variation [44]. As previously stated, this need led William Montague Cobb, Aleš Hrdlička, Robert J. Terry, Thomas Wingate Todd, and Mildred Trotter [1], among others, to assemble collections of human skeletons with detailed information about the individuals represented in those collections. Despite the recent emergence of ethical concerns, these collections have enabled much of the research development of forensic anthropology over the past century. The William M. Bass Donated Skeletal Collection, the Robert J. Terry Anatomical Collection and the Hamann-Todd Osteological Collection are amongst the best-known US collections. A quick search on the Scopus online database, using the names of the collections as search keywords, revealed 226 manuscripts published between 1963 and 2021 (November), with most of the articles (n = 110) published between 2010 and 2021. Although the subject areas in which these manuscripts were published were varied (e.g., social sciences, medicine, biochemistry, genetics, molecular biology, arts, and humanities), they mostly appeared in the American Journal of Physical Anthropology (n = 59) and the Journal of Forensic Sciences (n = 50), illustrating the importance of the collections within these research/subject areas. Many of the articles are related to the development and testing of age and sex assessment methods, once more highlighting the impact these collections have had on the development of this discipline. A word count analysis of the manuscript titles and author keywords illustrates this (Figure 1).

Figure 1. Word count analysis of the (a) manuscript titles and (b) author keywords.
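A word count analysis of this kind can be reproduced with a few lines of code. The sketch below is a minimal illustration, assuming the manuscript titles have been exported from Scopus into a plain-text file with one title per line; the file name and the stop-word list are hypothetical choices, not part of the original study.

```python
from collections import Counter
import re

# Hypothetical export: one Scopus manuscript title per line.
with open("scopus_titles.txt", encoding="utf-8") as fh:
    titles = [line.strip().lower() for line in fh if line.strip()]

# Minimal illustrative stop-word list; a real analysis would use a fuller one.
stop_words = {"the", "of", "and", "in", "a", "an", "for", "on", "from", "with", "to"}

# Count the remaining words across all titles.
counts = Counter(
    word
    for title in titles
    for word in re.findall(r"[a-z]+", title)
    if word not in stop_words
)

for word, n in counts.most_common(20):
    print(f"{word:<20} {n}")
```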
The practice of forensic anthropology requires the accurate estimation of sex, age at death, living stature, and ancestry, as well as the recognition of pathological conditions and anatomical variation that can contribute to positive identification. Such estimates rely on published and accepted methods that reflect the errors and probabilities involved. Much of the data leading to the development of those methods has been gleaned from documented skeletal collections.

As noted decades ago by Dwight [45] and Stewart [44], methods in forensic anthropology must consider human variation, and for this reason the emphasis in the study of human remains is distancing itself from racial typology and focusing on human variation [6]. With this in mind, documented collections with known information on age at death, sex, living stature and other variables that allow osteological methods to be developed are being used with a new perspective. Data on human variation have allowed for the development of more complex and accurate models of sexual diagnosis and age-at-death estimation based on human osteological remains. These methods emphasize a statistical approach to human variability, biological sex diagnosis and age-at-death assessment. They also provide data on the associated errors and probabilities, as well as information on a method's accuracy; the information has become more biologically robust and less subjective. Furthermore, reliable methods must be based upon sufficiently large sample sizes. Thus, documented collections must be sizeable enough to provide meaningful samples when divided by sex, age at death, or other variables. Consequently, many collections tend to have more than 200 individuals or anatomical elements. In many cases, some collections continue to incorporate new individuals/elements, aiming to increase the representativity of variation. There are, however, some limitations to the enlargement of the collections.
Historically, within the United States, documented collections reflect the regional efforts of professionals. As such, each collection does not represent the United States population at large, but rather a geographical and temporal subsample. Local laws and regulations, and the scientific interests of the individuals assembling the collection, shaped the demographic characteristics of individuals in each documented sample. For example, impoverished and marginalized individuals from the 19th century comprise some of the most important anatomical collections of the US, and do not reflect the modern US population. Since the 19th century, secular changes in the cranial morphology and limb proportions of Americans have been recorded [46-50], thus making the anatomical collections challenging for establishing and refining osteological methods for forensic research [45]. Modern collections do not represent the diversity of the living American population either [22]. For example, Godde [51] plotted survivorship curves of data derived from the William M. Bass Donated Skeletal Collection, a cemetery from the same county as the individuals from the collection, the census data for deaths from Knox County in Tennessee, and the US census data for deaths across the country; a minimal sketch of this kind of comparison appears below. The survivorship plotted for the William M. Bass Donated Skeletal Collection differed from the three other sources of data, showing that the collection's age-at-death profile is not representative of the US population [51]. The William M. Bass Donated Skeletal Collection is also predominantly composed of older European American males [8,29], although the sex demographics will even out, as 65% of 4896 pre-donors are female [8], and self-donations by individuals of other self-reported racial groups have increased [7]. However, minorities and immigrants are skeptical and/or fearful of donating their bodies to science, due to past unethical and/or criminal practices in the US. Historically, research with the bodies of marginalized people without their consent was vastly employed in medicine and anthropology, including in the formation of documented skeletal collections. Winburn et al. [8] suggested a transparent conversation when relating with African American communities about forensic anthropological research and body donation, and recommended explaining how their donations may benefit their communities, and how the forensic sciences have a social duty and role in American society. Communication with minorities should also involve the collaboration of scholars and students, family and pre-donors, and religious and community leaders from diverse groups [8]. Yet, community involvement should be a long-term commitment by forensic anthropologists. Research performed with the donors should be periodically reported back to the community to which the individual belonged while living. Institutions that curate the modern skeletal collections and that are aiming for more diverse demographics should also consider how they can support and finance the participation of people from marginalized communities in the discourse regarding body donation, their rights as minorities, and ethics in forensic research.
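As a concrete illustration of the comparison Godde [51] describes, the sketch below computes and plots empirical survivorship curves from two age-at-death samples. The samples here are randomly generated placeholders, not data from any actual collection or census.

```python
import numpy as np
import matplotlib.pyplot as plt

def survivorship(ages_at_death, age_grid):
    """Fraction of individuals whose age at death exceeds each grid age."""
    ages = np.asarray(ages_at_death)
    return [(ages > age).mean() for age in age_grid]

# Placeholder samples only: a collection skewed towards older ages,
# and a broader, census-like sample of deaths.
rng = np.random.default_rng(0)
collection_ages = rng.normal(70, 9, 500).clip(0, 100)
census_ages = rng.normal(74, 16, 5000).clip(0, 100)

grid = np.arange(0, 101)
plt.plot(grid, survivorship(collection_ages, grid), label="documented collection")
plt.plot(grid, survivorship(census_ages, grid), label="census deaths")
plt.xlabel("age (years)")
plt.ylabel("proportion surviving")
plt.legend()
plt.show()
```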
Sample profiles and subsequent interpretative analyses may also be influenced by the reduced representativity of population variation and by the limited number of individuals from diverse socioeconomic backgrounds in the modern collections. The quantification of socioeconomic variables in the assessment of human skeletons needs to be considered by those developing sex and age-at-death assessment methods. Research has proven that a significant correlation exists between bone development and socioeconomic status [52,53]. Documented collections allow control for some of the bone variability due to socioeconomic contexts relating to the individuals' known bio-history, further contributing to their value within forensic anthropology. For instance, if identification standards are developed from one subset of the American population, those standards may not provide the best results when used on individuals from a different population group.

Gradually, research use of these collections has recognized those and other limitations. With the aid of modern computers and statistical procedures, such recognition has led to the development of metadata and large databases using multiple collections. Such databases include the Forensic Data Bank, which compiled metric and non-metric information on individuals from the documented skeletal collections and modern forensic cases within the US [46]. The Forensic Data Bank aims to support the development of osteological identification standards to be applied in forensic cases [46]. Alongside dry-bone databases, isotopic databases are also being compiled, as is the case with the Forensic Isotopes Nation Database (FIND) created by Herrman et al. [54]. FIND is a repository of isotopic data from individuals with known residential histories, drawn from the documented skeletal collections and resolved forensic cases in the US, and it provides forensic anthropologists with a comparative isotopic database of individuals of known residence. Imaging databases are also under development and are increasingly used in forensic anthropology for the identification of unknown individuals [55,56], as well as teaching tools. To date, 500 skeletons from the William M. Bass Donated Skeletal Collection have been scanned with computerized tomography (CT) for anthropological and biomedical research [29], and a larger CT-scan database, the New Mexico Decedent Image Database (NMDID), has been developed. The NMDID was created between 2010 and 2017 at the University of New Mexico and is composed of whole-body CT scans and metadata of residents of New Mexico with known biographical data and information on health and circumstances of death, collected from autopsies and interviews with next of kin [57].

Recognition of the limitations of the classic collections in the United States and Europe has also led to the formation of documented collections in other parts of the world [58]. Methods developed within specific countries have limited applications elsewhere, especially in reference to estimates of ancestry, living stature or other morphometric features. The global growth of interest in forensic anthropology and its application has stimulated colleagues, especially in Latin America and Europe, to develop documented collections that are more relevant to local casework. These new collections supplement those from the United States in providing key evidence of human variation from different time periods.
The Educational Value of Documented Skeletal Collections

Documented collections also present training opportunities. Most forensic anthropologists rely on accepted, published methods in their casework; indeed, the legal system calls for nothing less [59]. However, seasoned anthropologists and those in training benefit from testing their skills on documented collections. Such practice reveals the nuances of application and provides opportunities to examine the remains of individuals different from those they are most familiar with. Forensic anthropologists preparing for practical certification examinations find this experience particularly useful. Forensic anthropology is regarded as a subfield of biological anthropology that contributed to the non-standardized courses in higher education [60]. The forensic anthropology facilities yield a valuable opportunity for students and professionals to be educated and trained in field recovery, forensic taphonomy, and human identification with human cadavers and documented skeletons, a resource not available in most higher education institutions in the United States. The University of Tennessee offers its students training through the simulation of forensic field experiences in the recovery of human bodies [7]. Students are also granted the opportunity to clean and label the human skeletons within the facility's curation and body donation program [7]. Through public lectures and internship programs in forensic anthropology for local high school students, the center also provides a return to the community whose family members compose the William M. Bass Donated Skeletal Collection [7,29].

As a final note, it is also necessary to acknowledge that, with technological developments, virtual collections based on 2D and 3D models have been used alongside dry-bone-based collections, as have other imaging reconstructions of human bones. During the 2020/2021 COVID-19 pandemic lockdowns, this availability of virtual human remains was a major teaching resource, as opposed to a hands-on approach with dry bones. That transition has become a driving force in the greater use of 2D and 3D models of human remains for teaching.

Historically, documented collections of human remains have provided the foundation for research and training in forensic anthropology. Most of our current methods can be traced to research on these collections. Today, these collections are supplemented by clinical data, especially those derived from radiology and related imagery. Modern research values these collections, but with an enhanced focus on their limitations.
Ethical Concerns on the Inclusion of Skeletons of Unclaimed Individuals in the Documented Skeletal Collections

The Anatomy Acts, implemented before the UAGA, ensured that the source of cadavers for research and teaching came exclusively from the most vulnerable sector of the population. Anatomical laws targeted the impoverished, with the reasoning that poor individuals would pay their debt to society with their body, in service of science and education [15]. Dissection was a stigmatized practice at the time and was perceived as capital punishment [32]. Therefore, the Anatomy Acts were also a means of social control against indigence [32]. Lawmakers could, in this way, protect the white middle class from being dissected and guarantee a legal supply of bodies for medical schools [15]. Economically vulnerable individuals, without a support system, and whose voice was ignored in the matter, were the most likely to be a source of bodies [14]. The major sources of unclaimed cadavers were poorhouses, hospitals, morgues, prisons, long-term care facilities, and mental institutions, which thereby avoided funeral costs [14,31,40]. Therefore, the skeletons of criminals and unclaimed individuals were accumulated from dissection practices to form the documented collections. Nystrom [32] argued that the establishment of the anatomical collections was based on structural violence against marginalized individuals. In fact, impoverished and marginalized individuals represent the vast majority of the individuals collected, especially African Americans, European immigrants, and individuals who partook in the Great Migration [14]. For example, 52% of the Huntington Collection is derived from immigrants, and 43% is composed of African American and Euro-American impoverished residents of New York [15]. Ethical concerns for the curation and research of unclaimed African American skeletons in the United States are growing. Dunnavant et al. [30] have called for the creation of an African American Graves Protection and Repatriation Act (AAGPRA) based on the Native American Graves Protection and Repatriation Act. Dunnavant et al. [30] argued that an AAGPRA would guarantee the protection of graves and ensure the proper curation or repatriation of unclaimed skeletons of African Americans. The proposal would not prohibit osteological research on African Americans, but such research would have to be performed ethically, respecting their dignity, and with the consent of descendants [30].
Conclusions

Documented skeletal collections have been an important resource for the establishment of forensic anthropology in the United States. The value of the documented skeletal collections lies in the biographical and metrical data associated with them. Those collections of known identities have allowed the establishment and refinement of osteological methods to aid in the identification of unknown individuals. Documented collections have also been a vital resource in teaching and training students and professionals in forensic anthropology and field recovery. In the United States, the older documented collections were assembled in the 19th and early 20th centuries, mostly from unclaimed individuals, for anatomical and anthropological studies. Modern collections were assembled in the late 20th or 21st century, a process that is still ongoing. Modern skeletal collections are assembled through body donation programs associated with human decomposition research facilities. The anatomical skeletal collections do not reflect modern Americans, as secular changes in skeletal morphology and size have occurred since the 19th century. However, the modern collections do not reflect the present skeletal variation either, as they represent one subset of the population. The lack of diversity in the documented skeletal collections can have a negative impact on osteological methods. To overcome the limitations of the documented skeletal collections, national databases have been created. Databases such as the Forensic Data Bank, with compiled metric and non-metric information, and the Forensic Isotopes Nation Database carry data from individuals from the documented skeletal collections and forensic cases conducted in the U.S. Ethical discussions surrounding the curation and research of unclaimed African Americans are growing. While the documented skeletal collections continue to play a key role in the professional development of forensic anthropology, the ethical discussions happening among scholars and collection-related communities will forge new paths in how research with and about the collections is carried out. This latter point will certainly address creation and curation issues, incorporating not only physical collections but also virtual collections.
Author Contributions: Conceptualization, V.C. and F.A.C.; data curation, V.C. and F.A.C.; writing-original draft preparation, V.C., F.A.C. and D.H.U.; writing-review and editing, V.C., F.A.C. and D.H.U.; funding acquisition, F.A.C. All authors have read and agreed to the published version of the manuscript.

Funding: Francisca Alves Cardoso's research is funded by the research projects Bone Matters/Matérias Ósseas (IF/00127/2014/CP1233/CT0003, funded by FCT/Portugal) and Life After Death: Rethinking Human Remains and Human Osteological Collections as Cultural Heritage and Biobanks (2020.01014.CEECIND, funded by FCT/Portugal), and by NOVA FCSH 6ª Edição do Financiamento Exploratório para Projetos Internacionais - Bones Digital Footprint: Insights from Scientometrics and Social Media Analysis (BoDiPrint). This research is also within the scope of the CRIA - Centro em Rede de Investigação em Antropologia (UIDB/04038/2020) Strategic Plan.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Table 1. The documented skeletal collections in the US.
Brief clinical evaluation of six high-throughput SARS-CoV-2 IgG antibody assays

Highlights
• The automated immunoassays showed a higher sensitivity than the ELISA-based assays.
• The assay using the S and N proteins as antigens showed the highest sensitivity.
• There were differences in the immune response (targeting the SARS-CoV-2 S/N antigens).
• The titers generated with the examined assays correlated well with the PRNT.

In addition, the SARS-CoV-2 serostatus of asymptomatic individuals, or of patients with a mild clinical course who present late (a couple of weeks) after infection, is of interest. Ideally, a positive IgG status will offer potential immunity, but if so, questions on how long it will last still remain. Furthermore, for therapeutic or prophylactic approaches, convalescent plasma may be used, as vaccines and other drugs are still under development (3). For these purposes, sensitive and especially highly specific antibody assays are needed. The spike (S) protein of SARS-CoV-2 has been shown to be highly immunogenic and is the main target for neutralizing antibodies (4). Currently, there are different spike (S) and/or nucleocapsid (N) protein-based assays available, commercially or developed in-house, but there is limited data on how these tests perform with clinical samples. This study aims to provide a quick overview of some of these assays (two commercially available ELISA assays, four automated immunoassays and a plaque reduction neutralization test (PRNT)), focusing on the detection and neutralization capacity of IgG antibodies in follow-up serum or plasma samples of individuals with PCR-diagnosed SARS-CoV-2 infections. When calculating the overall sensitivity, we used the total time frame of 49 days after first PCR positivity and focused on the different antigens (S or N antigen) used as binding antigen(s) in the assays.

Serum and plasma samples

We collected follow-up serum or plasma samples (in the following simply referred to as samples) from individuals with PCR-diagnosed SARS-CoV-2 infections (n=45) (TABLE S1). Non-SARS-CoV-2 samples (TABLE S2) were used to assess potential cross-reactivity and the risk of potential false positive results.

Immunoassay platforms

Samples were tested within one day, in batches, on multiple commercially available (mostly automated) immunoassay platforms (TABLE 1) according to the manufacturers' protocols.

ELISA

The Euroimmun SARS-CoV-2 IgG ELISA (Euroimmun, Lübeck, Germany) and the Virotech SARS-CoV-2 IgG ELISA (Virotech Diagnostics GmbH, Rüsselsheim, Germany; TABLE 1) were used in an identical manner, according to the manufacturers' recommendations. Samples were diluted 1:101 in sample buffer and incubated at 37 °C for 60 or 30 min, respectively, in a 96-well microtiter plate, followed by each protocol's washing and incubation cycles, including controls and required reagents. Optical density (OD) was measured for both assays at 450 nm using the microplate reader of a VIRCLIA® automation system (Vircell Spain S.L.U., Granada, Spain). The signal-to-cutoff ratio was calculated, and values were expressed according to each manufacturer's protocol.

For these purposes, there is a demand for (cost-effective) high-throughput assays, which can be automated and used for large sample sizes. The sensitivity of these assays depends on the assay used and the moment of testing in the infection phase (low sensitivity a couple of days after infection vs. higher sensitivity a couple of weeks after infection) (5,6).
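To make the two quantities that recur below concrete (the signal-to-cutoff ratio and the overall sensitivity), the following minimal sketch uses invented numbers; actual cutoff rules are manufacturer-specific, and the simple OD-to-calibrator ratio used here is only illustrative.

```python
import numpy as np

def signal_to_cutoff(sample_od, calibrator_od):
    """Illustrative signal-to-cutoff ratio: sample OD divided by the
    calibrator OD; each manufacturer defines its own exact rule."""
    return sample_od / calibrator_od

def overall_sensitivity(assay_positive, pcr_positive):
    """Share of PCR-confirmed cases flagged positive by the assay,
    pooled over all sampling time points."""
    assay_positive = np.asarray(assay_positive, dtype=bool)
    pcr_positive = np.asarray(pcr_positive, dtype=bool)
    return (assay_positive & pcr_positive).sum() / pcr_positive.sum()

# Invented example: 45 PCR-positive samples, 40 of which test positive.
pcr = np.ones(45, dtype=bool)
assay = np.array([True] * 40 + [False] * 5)
ratios = signal_to_cutoff(np.array([1.8, 0.4, 1.1]), calibrator_od=1.0)
print(ratios >= 1.0)                                            # reactive if ratio >= 1
print(f"sensitivity: {overall_sensitivity(assay, pcr):.1%}")    # 88.9%
```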
The commercially available assays examined in our study generated consistent results regarding the detection of SARS-CoV-2 IgG antibodies. The sensitivity (without differentiating the time point of sampling) varied within the group of assays using the same antigen as a target for the antibodies. While the majority of antibodies are typically produced against the N protein (which might therefore be the most sensitive target protein), antibodies produced against the S protein are expected to be more specific and potentially neutralizing. In the group of N protein-based assays the sensitivity varied from 66.7 to 77.8%, and in the S protein-based assays from 71.1 to 75.6%. This might be due to differences within the recombinant antigens used and/or a system-inherent feature. The dual-target (S1 and N protein-based) assay for the Vircell VIRCLIA® automation system and the PRNT demonstrated the highest sensitivities, with 89% and 93.3%, respectively.

There is a large discrepancy between the sensitivities determined for the assays examined in our study and the sensitivities given in the manufacturers' specifications and described in the literature. This is not because of the small examined sample size, but because overall sensitivities (not differentiating between the time points after positive PCR testing) were given in this study. This was done for a better comparability of the examined assays, in terms of demonstrating the effect of the antigens used in the assays on their ability to detect antibodies, independent of the time point of sampling. As the gold standard, the PRNT is hands-on and time-intensive and can only be performed for smaller sample sizes in a BSL-3 laboratory. However, it is capable of detecting neutralizing antibodies. In our study, the antibody titers generated with the commercially available assays correlated well with the PRNT titers.

The mechanism of immunity, especially of protective immunity (if applicable), and how long it will last, need to be further investigated. A titer needed for potential protective immunity is not yet (officially) defined. Besides humoral immunity, there is evidence that T-cell-mediated immunity plays a role (7).

The automated immunoassays demonstrated a higher overall sensitivity than the ELISA-based assays. In particular, the assay using the S and N proteins as antigens showed the highest sensitivity within the group of commercially available assays examined in this study (including samples with individual characteristics). The titers generated with these assays correlated well with the PRNT, demonstrating the neutralizing capacity of the detected antibodies. Because of the currently low prevalence of SARS-CoV-2, these assays are at present primarily eligible for epidemiological investigations, as they are only of limited informative value in individual testing.

Authors' contributions

NK and HR designed the study. NK, CR and SW performed experiments. NK, HR and SC analyzed data. NK and HR wrote the manuscript. All authors discussed the results and commented on the final manuscript.
Super-Resolution Image Reconstruction Based on Self-Calibrated Convolutional GAN

With the effective application of deep learning in computer vision, breakthroughs have been made in research on super-resolution image reconstruction. However, many studies have pointed out that insufficient extraction of image features by the neural network may degrade the newly reconstructed image. On the other hand, the generated pictures sometimes look too artificial because of over-smoothing. In order to solve the above problems, we propose a novel self-calibrated convolutional generative adversarial network. The generator consists of feature extraction and image reconstruction. Feature extraction uses self-calibrated convolutions, which contain four portions, each with a specific function. This can not only expand the range of the receptive fields, but also capture long-range spatial and inter-channel dependencies. Image reconstruction is then performed, and finally a super-resolution image is produced. We have conducted thorough experiments on different datasets, including Set5, Set14 and BSD100, under the SSIM evaluation method. The experimental results prove the effectiveness of the proposed network.

I. INTRODUCTION

Single-image super-resolution reconstruction (SISRR) is an important research direction in the field of computer vision. Super-resolution reconstruction aims to reconstruct the corresponding high-resolution image from an observed low-resolution image in order to obtain richer image detail, and has been utilized in medical imaging [1], [2], monitoring equipment [3] and other fields [4]. The main task of SISRR is to determine the mapping function between a low-resolution image and its high-resolution counterpart, and to reconstruct the high-resolution image corresponding to the low-resolution picture. With the development of convolutional neural networks for image super-resolution, SISRR techniques have developed rapidly. Among them, SRCNN [5] was the first technique to use a convolutional neural network for super-resolution image reconstruction, addressing the ill-posedness that limited traditional methods of learning the mapping between low and high resolution. With the gradual rise of deep learning, newly designed structures based on deep networks have achieved good results by extracting many more features from the original pictures. However, deep networks require relatively long training times, and much of the detailed information extracted by a complex deep network cannot be fully utilized.

In this paper, we propose a self-calibrated convolutional network structure based on a generative adversarial network in order to make full use of the extracted features. The generator structure consists of two parts: feature extraction and image reconstruction. Feature extraction introduces a self-calibrated convolutional neural network [6] to fully extract the features of the low-resolution image. First, a multi-scale method is adopted in the feature fusion module. The fusion results are the input of the image reconstruction part, and the reconstruction result is the output. Compared with other methods, such as spatial pooling and attention [7], [8], [9], which utilize convolution operations to extract long-range dependencies, the method in our paper divides the learnable convolution kernel into four parts.
By doing this, the new kernel can not only adaptively and efficiently encode the contextual information of long-range regions as well as the spatial position features of pixels in the image, but also capture the dependence between the channels at each spatial position. This method can simply be embedded in a common convolutional neural network [10] without adding any hyper-parameters.

In addition, super-resolution image reconstruction usually uses the mean squared error (MSE) [11] as the loss function. However, MSE is highly sensitive to large errors and has limited effect at the pixel level of the image, which may make the reconstructed image over-smooth. Therefore, some scholars have proposed to use the structural similarity index (SSIM) [12] instead of MSE as the loss function. SSIM mainly focuses on brightness, contrast, and structure, and is mainly used to evaluate the similarity of two images. This paper adopts an adaptive robust loss [13], which learns its hyper-parameters independently and reduces the workload of manual tuning. Its functional form is not limited to MSE; it also covers the L1 loss, the L2 loss, and combinations of various loss functions, which is conducive to network training and provides good robustness.

In order to alleviate the above-mentioned problems, this article makes the following contribution:
• The self-calibrated convolutional network is introduced for the first time in the feature extraction part of the generator for the super-resolution reconstruction task, to fully extract the image features and make full use of the detailed information.

II. RELATED WORK

A. Super-resolution image reconstruction based on convolutional neural networks

The method in this paper mainly uses convolutional neural networks, including shallow and deep neural networks, to learn the mapping relationship between low-resolution and high-resolution images. SRCNN [5] was the first to use a CNN as the mapping function. In order to achieve image reconstruction, bicubic interpolation is performed before feature extraction. This enlarges the low-resolution images at the beginning, which increases the cost of network computation. FSRCNN [15] is an improved version of SRCNN: the activation function ReLU is replaced by a parametric ReLU, which is more conducive to network training. Low-resolution images are input directly into the network, and image reconstruction is performed at the end of the network. This process solves SRCNN's problem of overwhelming computational complexity. However, it is still not able to fully extract the features of the image. With the development of deep learning, the number of network layers in newly emerged structures has continued to increase. LapSRN [16] utilizes the Laplacian pyramid structure, performing deconvolution operations on each layer of the pyramid in order to achieve up-sampling. This method exploits the nature of the Laplacian pyramid to predict the sub-bands step by step, applying cascaded convolution operations to extract the features. DRCN [17] was the first technique to use a recurrent convolutional network for super-resolution image reconstruction. The idea is to determine the appropriate number of recurrent convolution layers to prevent gradient vanishing and gradient explosion while extracting the high-frequency information of the image.
VDSR [18] is based on an improved 20-layer VGG network and takes into account the slow network convergence caused by network deepening. A cascaded convolution kernel is adopted to learn the high-frequency residual information between the low-resolution and high-resolution images. This method needs to be combined with other training techniques, such as gradient clipping strategies. EDSR [19] improves the residual network structure and increases the network depth while removing unnecessary modules from the network. The deep network addresses training instability and offers certain advantages, but it still suffers from increased resource consumption. EDSR performs single-scale extraction of high-frequency information, while VDSR performs multi-scale extraction; both need bicubic interpolation to process the input.

In summary, although the super-resolution images it reconstructs are relatively blurry, the shallow network lacks the ability to fully extract features. On the other hand, the parametric ReLU and the deconvolution method have been widely applied in subsequent super-resolution image reconstruction research. Even if a deep convolutional network can extract more information, it cannot make full use of the extracted feature information. In order to solve the problem of complicated calculations caused by network deepening, Wang et al. [20] proposed using ordinary differential equations to guide network design. Compared with the above works, the self-calibrated network proposed in our paper does not require additional parameters, and the image features can be effectively extracted and fully utilized. Our proposed method can achieve the same effect as a complex network, and can be embedded into different tasks.

B. Super-resolution image reconstruction based on generative adversarial networks

The Generative Adversarial Network (GAN) [21] was proposed by Goodfellow in 2014. It includes a generator and a discriminator. The generator is used to generate diverse samples, and the discriminator is essentially a binary classifier that judges whether the generated image samples are real. The generator and the discriminator play a min-max game, and finally the generator produces samples with diversity and authenticity. DCGAN [22] is a popular method that introduces the parametric ReLU and deconvolution methods into the GAN structure. MAD-GAN [23] proposed a multi-generator, single-discriminator structure in which parameters are shared among the multiple generators; correspondingly, the discriminator is also changed to match the generators. The model is mainly designed to prevent the problem of mode collapse. DGAN [24] used MAD-GAN for super-resolution image reconstruction and obtained good results. SRGAN [14] is based on deep residual network modules to obtain the contextual information of the image; its use of skip connections increases the complexity of the network. The multi-scale generative adversarial network [25] was proposed as an improvement on SRGAN. Compared with SRGAN, this method has a deeper network structure with more layers, but unsatisfactory results.

C. Loss function

In the network training process, choosing a suitable loss function can help the network converge quickly and better learn the distribution characteristics of the data.
The L1 loss is the sum of the absolute values of the differences between the predicted values and the target values in a regression task. Its disadvantage is that it is not differentiable at the origin: in the training of a deep neural network, the objective function must be differentiated for back-propagation, and this property makes the calculation and solution inconvenient. MSE [11] is the sum of the squared differences between the predicted values and the target values. Using the MSE loss function in the super-resolution image reconstruction task causes the reconstructed image to be too smooth, and the human visual impression is poor. Some scholars therefore proposed to use SSIM [12] as the loss function instead of MSE. SSIM is usually used as an evaluation method in super-resolution reconstruction tasks, assessing the similarity of the reconstructed image in brightness, contrast, structure, etc., and it obtained certain effects. SRGAN [14] proposed a perceptual loss function based on the GAN structure. The perceptual loss function includes a content loss and an adversarial loss. The content loss uses MSE to calculate both the similarity between the reconstructed image and the real image and the similarity between their high-level features extracted by a VGG network. MSE dominates the content loss; therefore, the reconstructed images still appear too smooth.
III. METHOD
A. Problem definition
The main purpose of our research is to reconstruct a single low-resolution image into a super-resolution image using a generative adversarial network, defined as follows: the real high-resolution image I^HR has size rW×rH×C, where C is the number of image channels; the input low-resolution image I^LR has size W×H×C and is obtained by down-sampling the real high-resolution image I^HR; and the reconstructed super-resolution image I^SR has size rW×rH×C. Using the generative adversarial network to learn the mapping function, the reconstruction result is expressed as I^SR = G(I^LR). The GAN objective function is modified as:
min_G max_D E[log D(I^HR)] + E[log(1 − D(G(I^LR)))]
Among them, D is the discriminator and G is the generator.
B. Method structure
A 3×3 convolution kernel is usually chosen for feature extraction in computer vision applications. In order to extract more detailed information from the picture, a 5×5 convolution kernel can be chosen instead of a 3×3 kernel to increase the receptive field, but this increases the network parameters. Some researchers have also noted that two stacked 3×3 convolution kernels have the same receptive field as one 5×5 convolution kernel while using fewer parameters. In MSRN [26], the authors build an MSRN module with 3×3 and 5×5 convolution kernels. The two convolution operations in the module extract image features separately, and the extracted features are merged at different scales. This method can effectively extract image features, and the quality of the reconstructed image is satisfactory. However, the multiple convolution operations increase the network parameters. The generator in our method instead uses a self-calibrated convolutional network: it uses only 3×3 convolution kernels to fully extract image features, and it uses feature fusion to make full use of the extracted features. The overall structure of our proposed low-resolution image reconstruction method is shown in Figure 1. The input of a traditional generative adversarial network is Gaussian noise.
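Ahead of the detailed description of the generator in the next subsection, a simplified sketch of a self-calibrated convolution block may help fix ideas. It is written in the spirit of reference [6]; the channel split, pooling rate r, and layer names are our own illustrative assumptions, not the paper's exact configuration. Note in passing that it uses only 3×3 kernels, relying on down-sampling rather than 5×5 kernels to widen the receptive field.

```python
# A simplified sketch of a self-calibrated convolution block (after ref. [6]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        c = channels // 2
        # Portion 1: operates on a down-sampled view to enlarge the receptive field.
        self.f1 = nn.Conv2d(c, c, 3, padding=1)
        # Portion 2: extracts features directly at the original resolution.
        self.f2 = nn.Conv2d(c, c, 3, padding=1)
        # Portion 3: fuses the calibrated features across scales.
        self.f3 = nn.Conv2d(c, c, 3, padding=1)
        # Portion 4: a plain branch that preserves the original spatial context.
        self.f4 = nn.Conv2d(c, c, 3, padding=1)
        self.r = r

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)        # split channels into two modules
        # Calibration branch: down-sample, convolve, then up-sample back.
        d = F.avg_pool2d(x1, self.r)
        calib = F.interpolate(self.f1(d), size=x1.shape[-2:], mode="bilinear",
                              align_corners=False)
        # Gate the directly extracted features with the calibration signal.
        y1 = self.f3(self.f2(x1) * torch.sigmoid(x1 + calib))
        # Plain branch keeps the original spatial context information.
        y2 = self.f4(x2)
        return torch.cat([y1, y2], dim=1)        # merge the two modules

block = SelfCalibratedConv(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```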
Since our paper uses the generative adversarial network to learn the mapping between low-resolution and high-resolution images, the input of the generator is changed to a low-resolution image. The generator network does not generate new diverse samples but reconstructs super-resolution image samples. The inputs of the discriminator are the reconstructed super-resolution images and the real images, and its output is the discrimination result. After iterative training, the generator and discriminator in the model reach a Nash equilibrium, and the reconstructed samples become indistinguishable from the real samples.
1) Generator structure: The GAN structure in our paper is improved on the basis of SRGAN [14]. SRGAN uses a residual network to extract image features; however, its complex network structure does not utilize the features sufficiently. The generator network in our method includes two operations: feature extraction and image reconstruction. With the complexity of the network simplified, it can still extract sufficient features and make full use of them. The feature extraction operation uses a self-calibrated convolutional network [6], which is a set of convolution operations divided into four portions. Each portion is configured similarly to an ordinary convolution but has a specific function. The detailed structure diagram is shown in Figure 2. The process is divided into two modules. The first module contains three portions, labeled 1, 2 and 3 in Figure 2. The first portion applies down-sampling followed by up-sampling to extract image features. The second portion extracts image features directly. After that, the results of the two portions are merged. In the third portion, spatial information at different scales is obtained and the range of the receptive field is enlarged. The second module contains the fourth portion, corresponding to 4 in Figure 2, which extracts features directly; its function is to retain the original spatial context information. Finally, the outputs of the two modules are connected together. In this method, we embed multiple operations in the generator whose purpose is to fully extract image features. Placing the reconstruction part after feature extraction saves training time and resources; the reconstruction part adopts a deconvolution operation, and its form is more concise. Therefore, the combined calculation of the self-calibrated convolution is:
Y_1 = f_3(f_2(X_1) ⊗ σ(X_1 + up(f_1(down(X_1)))))
Among them, f_1, f_2 and f_3 represent the convolution operations shown as 1, 2 and 3 in Figure 2, respectively; up and down represent the up-sampling and down-sampling operations; σ is the sigmoid function; and ⊗ denotes element-wise multiplication. The final generator output is the reconstructed image I^SR = G(I^LR), obtained by applying the deconvolution-based reconstruction part to the fused features. The adversarial loss of the generator is calculated as:
loss_adv = Σ_n −log D(G(I^LR))
Existing super-resolution image reconstruction methods usually use MSE as the content loss function. However, MSE makes the reconstructed image too smooth, and the resulting image looks artificial. In order to improve on that, we introduce an adaptive robust loss function [13], which can learn the hyper-parameters set in the function independently while training the network, reducing the time spent searching for the best hyper-parameters manually. The general form of the adaptive robust loss function is:
f(x, α, c) = (|α − 2| / α) · ((((x/c)^2 / |α − 2|) + 1)^(α/2) − 1)
where α is a hyper-parameter that controls its robustness, with different values corresponding to different loss functions, and c is a scale parameter.
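The general form above can be written down directly. The sketch below follows the published formula of reference [13]; the clamping used to sidestep the removable singularities at α = 0 and α = 2 is our own simplification of that paper's treatment.

```python
# A minimal sketch of the general adaptive robust loss of reference [13]
# (Barron, 2019); in the adaptive variant alpha and c are learned jointly
# with the network, but here they are plain tensors for illustration.
import torch

def robust_loss(x: torch.Tensor, alpha: torch.Tensor, c: torch.Tensor):
    """f(x, a, c) = |a - 2| / a * (((x / c)^2 / |a - 2| + 1)^(a / 2) - 1)."""
    a = alpha.clamp(min=1e-6)               # avoid division by zero at alpha == 0
    b = (a - 2).abs().clamp(min=1e-6)       # removable singularity at alpha == 2
    return (b / a) * (((x / c) ** 2 / b + 1) ** (a / 2) - 1)

err = torch.linspace(-3, 3, 7)
print(robust_loss(err, torch.tensor(2.0), torch.tensor(1.0)))  # ~ 0.5 * x^2 (L2-like)
print(robust_loss(err, torch.tensor(1.0), torch.tensor(1.0)))  # a smoothed L1
```

With α fixed at 2 the function behaves like a scaled L2 loss, while α = 1 gives a smoothed L1; in the adaptive setting, α and c are optimized together with the network weights.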
In order to improve the quality of the reconstructed picture, the loss function of the generator also includes a perceptual loss. VGG is used to extract the high-level features of the picture, and the high-dimensional error between the real picture and the reconstructed high-resolution picture is calculated as:
loss_VGG = (1 / (W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (Φ_{i,j}(I^HR)_{x,y} − Φ_{i,j}(G(I^LR))_{x,y})^2
where Φ_{i,j} represents the feature map obtained through the VGG network. In addition, our paper introduces TV-loss regularization into the generator objective function to constrain the differences between adjacent pixels in the picture. The objective function of the generator is:
loss_G = loss_adv + loss_{f(α,x,c)} + loss_VGG + loss_TV
where loss_adv is the generator adversarial loss, loss_{f(α,x,c)} is the adaptive robust loss, loss_VGG is the perceptual loss, and loss_TV is the regularization loss.
2) Discriminator structure: The discriminator network draws on the discriminator structure of SRGAN, as shown in Figure 1. In order to obtain more image feature information, we use 5×5 convolution kernels instead of 3×3 convolution kernels to expand the range of the receptive field. At the same time, we reduce the number of network layers. At the end of the network, a 1×1 convolution kernel is used, which helps the trained model adapt to test samples of different sizes. The objective function of the discriminator network is the standard one:
loss_D = −E[log D(I^HR)] − E[log(1 − D(G(I^LR)))]
The overall objective function of the network combines the adversarial loss in the GAN with the various constraint terms introduced above, which jointly promote network training.
IV. EXPERIMENT
This section first introduces the experimental conditions and training details, then introduces the datasets and evaluation indicators, and finally shows and analyzes the experimental results.
A. Introduction to experimental conditions and training details
The experiments are conducted on a Windows 10 server with an NVIDIA 2080Ti GPU; the CUDA version is 10.2. All source programs are written in Python and implemented on the PyTorch framework, version 1.6. The discriminator uses 5×5 convolution kernels in place of 3×3 kernels, while the convolution kernels of the generator are all 3×3. The optimizer used for training is RMSprop (root mean square propagation) with a smoothing parameter of 0.9. The learning rate is 0.0005; after 20 epochs of training, it is reduced to 0.0001. The batch size is 64.
B. Datasets introduction
The model training and test datasets are natural images; VOC2012 [14] is used for model training and testing. First, preprocessing is applied to the training and test sets of VOC2012. The images are randomly cropped to a size of 128×128 and then down-sampled with scaling factors of 2, 4 and 8. Datasets such as Set5 [27], Set14 [28] and BSD100 [29] are used for performance testing. These pictures can be fed directly to the trained model; note that some pictures in the Set14 dataset are grayscale images, while the network is trained with 3 channels.
C. Evaluation index
Our paper evaluates the reconstructed super-resolution images under three evaluation indicators: PSNR, SSIM and MOS. PSNR is often used to compare the differences between corresponding pixels. SSIM [12] is a measure based on brightness, contrast, structure, etc., used to compare the similarity between corresponding images.
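Before turning to the exact formulas, which are given in the next paragraphs, here is a minimal sketch of computing PSNR and a single-window SSIM, assuming images normalized to [0, 1]; practical SSIM implementations use local Gaussian windows, which are omitted here for brevity.

```python
# Minimal PSNR and (global, single-window) SSIM, for illustration only.
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray) -> float:
    mse = np.mean((hr - sr) ** 2)
    return 10.0 * np.log10(1.0 / mse)        # images assumed in [0, 1]

def ssim_global(hr: np.ndarray, sr: np.ndarray,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> float:
    mu_h, mu_s = hr.mean(), sr.mean()
    var_h, var_s = hr.var(), sr.var()
    cov = ((hr - mu_h) * (sr - mu_s)).mean()
    return ((2 * mu_s * mu_h + c1) * (2 * cov + c2)) / \
           ((mu_s ** 2 + mu_h ** 2 + c1) * (var_s + var_h + c2))

hr = np.random.rand(128, 128)
sr = np.clip(hr + 0.05 * np.random.randn(128, 128), 0, 1)
print(psnr(hr, sr), ssim_global(hr, sr))
```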
MOS is used to evaluate the actual visual perception of images. Together, these methods verify the validity of the network structure. The calculation of PSNR first requires the mean squared error MSE; the overall formulas are:
MSE = (1 / (rW·rH)) Σ_{x,y} (I^HR_{x,y} − I^SR_{x,y})^2,  PSNR = 10 · log_10(1/MSE)
The calculation formula of SSIM is:
SSIM = ((2 μ_{I^SR} μ_{I^HR} + C_1)(2 σ_{I^SR I^HR} + C_2)) / ((μ_{I^SR}^2 + μ_{I^HR}^2 + C_1)(σ_{I^SR}^2 + σ_{I^HR}^2 + C_2))
where μ_{I^SR} is the mean of I^SR, μ_{I^HR} is the mean of I^HR, σ_{I^SR I^HR} is the covariance of I^SR and I^HR, and σ^2_{I^SR} and σ^2_{I^HR} are the variances of I^SR and I^HR. SRGAN [14] introduced the MOS evaluation method into the super-resolution image reconstruction task. MOS is a subjective evaluation method with 5 levels. We collect scores for the reconstructed images from a user study, where 1-2 points denote poor, 2-3 points average, 3-4 points good, and 4-5 points excellent. In our paper, 24 volunteers scored the reconstructed super-resolution images, and the average score was then calculated. For all of the above evaluation methods, the larger the value, the smaller the gap between the reconstructed image and the real image, and the higher the quality of the reconstructed image.
D. Experimental results and analysis
This section mainly shows the super-resolution reconstruction results and evaluation index results of different algorithms, together with an analysis of the results. The following is a brief description of the reconstruction algorithms to be compared:
• A reconstruction algorithm based on bicubic interpolation, i.e., cubic interpolation of the low-resolution image.
• The algorithm based on SRCNN [5]. That work used a CNN for super-resolution image reconstruction for the first time. It is a shallow network, and the reconstruction process is divided into three stages; it made a great contribution to processing super-resolution images with neural networks.
• The algorithm based on FSRCNN [15], also a shallow neural network, which improves the mapping layer and activation function of SRCNN and uses a deconvolution layer at the end of the network to speed up training. It performs better than SRCNN.
• The algorithm based on MSRN [26], a deep network that uses convolution kernels of different sizes to extract features and fuses the extracted features to obtain more detailed information. Although good results are obtained, the network is rather complicated.
• The algorithm based on SRGAN [14] and the GAN structure, where a min-max game is used to train the network. The generator uses a residual network to learn the details of the image. A perceptual loss function is also proposed, which adds a content loss on top of the adversarial loss. The content loss helps guide the reconstruction of super-resolution images with clear details.
• The algorithm based on the multi-scale generative adversarial network [25], which improves SRGAN. The generator uses a residual network with two sub-structures and performs information fusion to learn more, and more detailed, image information.
• The method of our paper, in which the generator uses a self-calibrated convolutional network with 3×3 convolution kernels. The network structure is simple and does not increase the number of network parameters. The network is divided into different parts, each with a different function. Owing to feature fusion, richer detailed information can be extracted and used, and the reconstructed super-resolution image achieves a better effect.
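Since several of the compared methods above differ mainly in their loss functions, it is worth making the generator objective of Section III concrete. The following sketch assumes an unweighted sum of the four terms and a conv5_4 VGG feature layer, both our own illustrative choices; robust_loss refers to the function sketched earlier.

```python
# Assembling loss_G = loss_adv + loss_f(alpha, x, c) + loss_VGG + loss_TV.
import torch
from torchvision.models import vgg19

vgg_features = vgg19(pretrained=True).features[:36].eval()  # up to conv5_4 (assumption)
for p in vgg_features.parameters():
    p.requires_grad_(False)

def tv_loss(img):
    # Penalize differences between adjacent pixels (total variation).
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def generator_loss(d_fake, sr, hr, alpha, c):
    # d_fake: discriminator sigmoid outputs on reconstructed images, in (0, 1).
    loss_adv = -torch.log(d_fake + 1e-8).mean()                     # adversarial term
    loss_f = robust_loss(sr - hr, alpha, c).mean()                  # adaptive robust content term
    loss_vgg = ((vgg_features(sr) - vgg_features(hr)) ** 2).mean()  # perceptual term
    return loss_adv + loss_f + loss_vgg + tv_loss(sr)
```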
The reconstruction results of the various algorithms above are shown in Figure 4. It can be seen from the figure that the algorithm in our paper has a good visual effect and a high similarity to the real picture. Tables 1-3 show the experimental results of the different algorithms on different datasets under different evaluation indicators when the scaling factor is 4. Comparing the results in Tables 1-3, the algorithm in our paper performs well on the SSIM evaluation index. Under the MOS evaluation index, the method in our paper is superior to all other algorithms except MSRN, which demonstrates its effectiveness.
E. Loss function comparison experiment
Our paper introduces an adaptive robust loss function alongside the adversarial loss. In order to prove the effectiveness of replacing the MSE content loss with the adaptive robust loss function, we compare the evaluation results of the network on different performance test datasets before and after the replacement. The experimental results are shown in Table 4. From Table 4, it can be seen that the adaptive robust loss function effectively improves the quality of the reconstructed image, and its performance is better than that of the MSE loss function under the different evaluation indicators. Meanwhile, it alleviates the over-smoothing of the reconstructed image caused by the MSE loss function.
V. CONCLUSION
In this paper, we propose a super-resolution image reconstruction method based on a generative adversarial network which, given a low-resolution image, can reconstruct the corresponding super-resolution image. The generator of the generative adversarial network uses a self-calibrated convolutional network that can extract the rich detailed information of low-resolution images and make full use of it. It not only expands the feature space scale range, but also maintains the dependency between the extracted feature channels. In addition, the objective includes an adaptive robust loss that helps the network train stably and, at the same time, alleviates the over-smoothing of the reconstructed image caused by the MSE loss function. Finally, experiments show that the method in our paper achieves better visual quality to the human eye than the other algorithms. Compared with the various other algorithms, it is superior under the SSIM evaluation index, but there is no significant improvement under the PSNR evaluation index, and further research is needed.
2021-06-11T01:16:16.365Z
2021-06-10T00:00:00.000
{ "year": 2021, "sha1": "c9dde82e255b0b3a290f4760cd68efcc6272e605", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c9dde82e255b0b3a290f4760cd68efcc6272e605", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
43343883
pes2o/s2orc
v3-fos-license
Quality of Service Provisioning in Biosensor Networks
Biosensor networks are wireless networks consisting of tiny biological sensors (biosensors, for short) that can be implanted inside the body of human and animal subjects. Biosensors can measure various biological processes that occur inside the body of the subject under test. Applications of biosensor networks include automated drug delivery, heart beat rate monitoring, and temperature sensing. Since biosensor networks employ wireless transmission, heat is generated in the tissues surrounding the implanted biosensors. Human and animal tissues are very sensitive to temperature increase. Therefore, the generated heat is mitigated by the natural thermoregulatory system. However, excessive transmissions can cause a significant increase in temperature and thus tissue damage. Hence, there is a need for a mechanism to control the rate of wireless transmissions. Of course, controlling the rate of wireless transmissions will lead to Quality-of-Service (QoS) issues like the required minimum delay and throughput. In this paper, we are going to investigate the above issues using the framework of Markov Decision Processes (MDPs). We are going to develop several MDP models that will enable us to study the different trade-offs involved in QoS provisioning in biosensor networks. The optimal policies computed using the proposed MDP models are compared with greedy policies to show their vigilant behavior and viable performance.
Keywords—Biosensor networks; Quality of service; Markov decision processes
I. INTRODUCTION
Biosensors can be implanted inside the body of human and animal subjects to form a biosensor network that can be used for monitoring and observing various biological processes and detecting anomalies. No processing is done on the biosensors. Therefore, measurements are transmitted to a Base Station (BS) for processing and recommendation of necessary actions. Biosensor networks can be used in daily medical tasks like sensing body temperature, calculating heart beat rate and automated drug delivery. Biosensor networks are powered either by rechargeable batteries or by continuously transmitting energy to them via electromagnetic waves. Biosensor networks have the same technical challenges introduced by traditional wireless sensor networks. In addition, they introduce new challenges that are unique to them. For example, a major challenge to realizing the full potential of biosensor networks is the heat they generate as a result of power dissipation and wireless communication. Every wireless transmission generates heat. This heat increases the temperature of the tissues that surround the biosensor. The effect of the generated heat is balanced by the human thermoregulatory system. However, excessive transmissions may result in heat that is greater than what can be drained by the thermoregulatory system. If the temperature increase exceeds a certain threshold, the tissues may be damaged. In such a case, the biosensor should be shut down in order for the tissues to cool down and attain the normal body temperature.
As a consequence, the maximum safe temperature level that human tissues can withstand becomes an important factor while operating biosensor networks. Hence, there is a need for intelligent thermal management techniques to mitigate the thermal effect on human tissues. Such techniques, for example, would enable long-term monitoring and measurement to be performed. Furthermore, there is a need for a mechanism to optimize the transmission schedule of biosensors to prevent potential damage to the human tissues and respect the required QoS. All these contradicting challenges need to be carefully and intelligently addressed. Very little work has been done in the area of QoS provisioning in biosensor networks. The main focus has been to minimize the average temperature increase of the system with no consideration for QoS [1], [2], [3]. On the other hand, QoS issues such as data loss and late delivery are not studied in the context of temperature-sensitive environments like the ones in which biosensor networks operate. These two specific QoS issues are studied in this paper using a new model that includes the state of the buffer inside a biosensor. In this way, a more accurate picture of the operation of biosensor networks can be painted. The rest of the paper is organized as follows. Section II provides a survey of the relevant literature. Then, Section III describes the newly proposed model. After that, Section IV presents the numerical results and several insights. Finally, Section V concludes the paper and provides directions for further research.
II. RELATED WORK
The goal of this paper is to extend the models presented in [1], [2], [3] to include some QoS metrics. The current models consider only power and energy constraints with no regard for the effect of traffic and finite buffer size on the performance of biosensor networks. Hence, in this section, we are going to critique the current models and discuss their shortcomings. For more details about the problem and its context, the reader is encouraged to read our previous papers and the references therein. Support for various QoS requirements like low packet loss and delay is essential in the development of future wireless networks that employ tiny sensing devices. Several cross-layer optimization techniques have been proposed in the literature to tackle QoS-related issues. For example, the authors in [4] handle the issue of the time-varying nature of the wireless channel by constraining different system parameters like data rate, modulation schemes, and transmission power. The trade-offs between the average transmission power, average packet dropping probability and average buffer delay are studied in [5]. The authors consider a system with a finite transmission buffer and a time-varying wireless channel. The system is formulated as both a constrained and an unconstrained MDP with an average cost criterion.
The heating issue in biosensor networks is addressed in [1], [2]. The authors optimize the network lifetime under strict temperature constraints by considering different amounts of initial energy. The system consists of biosensor nodes whose wireless transmissions affect the temperature level of the surrounding tissues. The system is modeled as a discrete-time MDP that evolves in discrete time steps. During each time slot, the scheduled sensor undergoes a change in its energy and temperature in accordance with its action. The temperature of the unaffected biosensors is assumed to decrease by a constant value, whereas the temperature of the affected biosensors increases in direct relation to the biosensor scheduled for transmission and the state of its wireless channel with the base station. The system is solved to obtain an optimal operating policy that maximizes the network lifetime while keeping the system in a safe temperature zone to avoid tissue damage. The results obtained indicate that the optimal policy performs better when compared with several heuristic policies. Figure 1 shows the system used in the study. Optimization of biosensor networks by increasing the number of transmitted samples is addressed in [3]. Three actions are considered, as shown in Figure 2. The control signals are initiated by the base station, which also controls the power source. The model is also formulated as a discrete-time MDP whose state includes the current energy, transmission power, and temperature.
Fig. 2: A biosensor can be rechargeable. Recharging biosensors can increase their lifetime, but it also increases the temperature of the tissues around them. A biosensor can be put to sleep to cool down.
The temperature is also used as a strict (i.e., global) constraint. The authors evaluate an optimal policy by solving the system using the value iteration algorithm with an average reward criterion. The obtained optimal policy maximizes the number of samples which can be transmitted by the biosensor network when compared with greedy and heuristic policies.
III. SYSTEM MODEL
Figure 3 shows the layout of the system studied in this paper. Only one biosensor node is shown. Each biosensor has its own state. Multiple biosensor nodes share a common wireless channel that connects them to the base station. Each biosensor node contains a finite-size buffer for storing the samples generated by the biosensing elements. These arriving samples may experience delay and loss while traveling to the base station. We assume that each biosensor node knows the state of the wireless channel and the size of its buffer. Hence, the state of the biosensor node is made up of three state variables: wireless channel, buffer size, and temperature. Based on the state of the biosensor, the controller should determine an efficient policy that optimizes certain QoS metrics. Basically, in each time slot, the controller decides whether to make a transmission or to put the transmitter to sleep. Next, the details of the system model are given.
A. Wireless Channel Model
We consider a slotted Rayleigh fading channel with Additive White Gaussian Noise (AWGN) N_o and channel bandwidth W. The Rayleigh fading channel is assumed to be slowly varying so that the received Signal-to-Noise Ratio (SNR) remains constant during a single time slot. It is also assumed that transitions are only allowed to the current or adjacent states. This slowly varying discrete-time Rayleigh fading process can be represented by a Finite State Markov Chain (FSMC) which has K channel states [6].
Fig. 3: System model for a biosensor node with a finite buffer and controller.
The channel states are numbered from 0 to K−1. The channel gain for each state c, where c ∈ {0, ..., K−1}, is represented by θ_c. The probability distribution of the next channel state during a time slot n, P_C(c, c′), can be calculated by partitioning the range of channel gains into a finite number of intervals. The information about the fading process given in [7] is used. Further, we assume that the channel state transition probabilities for all channel states are available [8].
B. Buffer State Model
Samples generated by the on-board sensing elements are stored in a finite buffer of size β. Let σ_n indicate the number of arriving samples at the beginning of time slot n. Samples arriving in time slot n can only be transmitted in the next time slot n+1. Sample arrivals are Poisson distributed with an average arrival rate equal to λ, and they are independent of the channel fading process. A truncated Poisson process is considered since the number of on-board sensors is finite. This necessitates an upper bound, represented by Z, on the number of samples. It is assumed that the length of each time slot is equal to one time unit. Hence, the truncated Poisson process can be approximated as follows:
p(σ_n = k) = e^{−λ} λ^k / k!  for k = 0, 1, ..., Z−1,   p(σ_n = Z) = 1 − Σ_{k=0}^{Z−1} e^{−λ} λ^k / k!
It should be pointed out that p(σ_n = Z) has a large probability due to truncation. In our model, this means that the likelihood that all sensors generate samples in one time slot is high. Let B_n be a state variable indicating the number of samples in the buffer at the beginning of time slot n. Then, the number of samples in the buffer in time slot n+1 is given by
B_{n+1} = min(B_n − A_n + σ_n, β)
where A_n is the number of samples transmitted in time slot n.
C. Transmission Model
The number of samples transmitted in a time slot n is equal to A_n, which takes values from the set {0, 1, 2, ..., α}. The transmitter is responsible for taking a certain number of samples from the buffer and transmitting them over the correlated fading channel. Let A = {a_0, a_1, a_2, ..., a_α} indicate the set of actions performed by the transmitter, where a_1 indicates that one sample is transmitted, a_2 indicates that two samples are transmitted, and so on; a_0 represents the sleep action, i.e., no sample is transmitted by the biosensor node in this state. Let P(C_n, A_n) represent the power required to perform action A_n in time slot n while the channel state is C_n. The power required to take a certain action in slot t must satisfy P_t(c, a) ∈ P_op, where P_op indicates the set of power levels supported by the transmitter. Furthermore, we enforce a fixed Bit Error Rate (BER) constraint on all the transmissions performed by the transmitter. Assuming an adaptive M-ary Quadrature Amplitude Modulation (MQAM) scheme with ideal coherent phase detection, the power required to satisfy a particular BER can be evaluated using equation (5) from [8]. In (5), N_o represents the channel noise, E_b represents the fixed BER constraint that is satisfied assuming coherent phase detection, θ_c represents the channel gain when the channel state is c, and W represents the bandwidth of the wireless transmission. If the transmitter cannot supply the power required by (5), the action is not feasible. The power calculated in (5) gives a pessimistic estimate of the power required to achieve a certain BER for the different channel states and actions.
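A small sketch of the arrival and power models just described. The truncated Poisson follows the text, with all residual probability mass lumped on Z; since equation (5) itself is not reproduced in the text, the MQAM power expression below uses a commonly cited approximation, BER ≈ 0.2·exp(−1.5·SNR/(M−1)) for coherent MQAM, purely as a stand-in.

```python
import numpy as np
from math import exp, factorial, log

def truncated_poisson(lam: float, Z: int) -> np.ndarray:
    """p(sigma = k) for k < Z, with all remaining mass placed on k = Z."""
    p = np.array([exp(-lam) * lam**k / factorial(k) for k in range(Z)])
    return np.append(p, 1.0 - p.sum())

def mqam_power(ber: float, gain: float, n0: float = 1e-9, W: float = 1e6,
               bits_per_symbol: int = 2) -> float:
    # Invert BER ~= 0.2 * exp(-1.5 * SNR / (M - 1)) for coherent MQAM
    # (an assumed stand-in for the paper's equation (5)).
    M = 2 ** bits_per_symbol
    snr_needed = -(M - 1) / 1.5 * log(ber / 0.2)
    return snr_needed * n0 * W / gain      # worse channel gain -> more power

print(truncated_poisson(lam=3.0, Z=4))     # note the large mass at Z
print(mqam_power(ber=1e-4, gain=0.8))
```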
In each time slot, the biosensor node's rate of transmission can be calculated by
R(C_n, A_n) = Φ(A_n) · F / G
where Φ represents the number of bits per symbol used for the transmission of A_n samples during F channel uses, and G represents the size of an incoming sample in bits. If we set G = F, the rate equals Φ. We can transmit different numbers of samples by changing the number of bits per symbol. If we set the number of bits per symbol equal to the number of samples transmitted in a time slot, then Φ(A_n) = A_n; i.e., the transmission rate becomes equal to the action suggested by the optimal policy.
IV. MDP FORMULATION
The global state of the system, denoted by S, consists of three variables, and the state space is given by S = C × B × T, where T is the temperature state variable. The size of the state space is thus the product of the number of channel states, the number of buffer states, and the number of temperature levels. In this section, two MDP formulations are given. They differ in whether the temperature is part of the global system state or a constraint. An important element of any MDP formulation is the system state transition probability matrix, which describes how the system transitions from one state to another. We assume that the state variables are independent. Thus, the state transition probability matrix of the system can be calculated by simple multiplication of the transition probabilities of the channel and buffer state variables. The temperature state variable plays no role in the computation of the state transition probability matrix, because it is not random. Hence, the following equation gives the state transition probability matrix of the system:
P(s′ | s, a) = P_C(c, c′) × P_B(b′ | b, a)
where s, c and b represent the current state of the system, wireless channel and buffer, respectively, while s′, c′ and b′ represent the next state of the system, wireless channel and buffer when action a is performed. The next state of the wireless channel is independent of the current action; the current action a determines the next state of the buffer only. The solution of an MDP formulation is referred to as a policy, which is a mapping from the system state space to the action space. That is, a policy determines the best action that should be performed in each possible state of the system. An optimal policy guarantees optimal behavior of the system. Two objectives are considered. The first one is to minimize the expected long-term average transmission power:
P̄_π = lim_{N→∞} (1/N) Σ_{i=1}^{N} E[P(s_i, π(s_i))]
where π(s_i) represents the action suggested by policy π and P(s_i, π(s_i)) is the instantaneous transmission power. The second objective is to maximize the expected long-term average transmission rate:
R̄_π = lim_{N→∞} (1/N) Σ_{i=1}^{N} E[R(s_i, π(s_i))]
where R(s_i, π(s_i)) represents the instantaneous transmission rate. An important performance metric is the average loss rate, which represents the expected number of samples dropped due to buffer overflow. The number of samples lost in time slot n for a specific state s and action a is
l_n(s, a) = max(b − a + σ_n − β, 0)
The average number of lost samples can be computed using the first moment as follows:
L(s, a) = E[l_n(s, a)] = Σ_{σ=0}^{Z} p(σ_n = σ) max(b − a + σ − β, 0)
The instantaneous delay during a time slot n can be computed as d_n = b_n / λ, where b_n is the instantaneous buffer size during time slot n. The expected long-term average delay is
D̄_π = lim_{N→∞} (1/N) Σ_{n=1}^{N} E[d_n]
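The factored transition structure P(s′|s, a) = P_C(c, c′)·P_B(b′|b, a) can be assembled directly from the channel and buffer models. Below is a sketch in which the tri-diagonal channel chain with uniform weights and the chosen sizes are illustrative assumptions, and truncated_poisson comes from the earlier sketch.

```python
import numpy as np

K, beta, alpha_max = 8, 8, 7           # channel states, buffer size, max action
arr = truncated_poisson(lam=3.0, Z=4)  # arrival distribution from the sketch above

# Slowly varying channel: transitions only to the same or adjacent states;
# the uniform weights here are placeholders for the fading-derived values of [7].
P_C = np.zeros((K, K))
for c in range(K):
    for d in (-1, 0, 1):
        if 0 <= c + d < K:
            P_C[c, c + d] = 1.0
    P_C[c] /= P_C[c].sum()

def buffer_next_dist(b: int, a: int) -> np.ndarray:
    """Distribution of B' = min(b - a + sigma, beta), for a feasible a <= b."""
    out = np.zeros(beta + 1)
    for sigma, p in enumerate(arr):
        out[min(b - a + sigma, beta)] += p   # overflow mass collapses onto beta
    return out

# Joint next-state distribution for one (c, b) pair under action a:
c, b, a = 3, 5, 2
P_joint = np.outer(P_C[c], buffer_next_dist(b, a))   # shape (K, beta + 1)
print(P_joint.sum())                                  # ~1.0, a valid distribution
```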
Finally, our thermal model is discussed. In this model, the increase in temperature is directly proportional to the magnitude of the action; for example, transmitting one sample during the best channel state (C_n = 0) will increase the temperature by one unit. The instantaneous temperature increase T(s_n, A_n) is thus computed in direct proportion to the number of samples transmitted. The long-term average temperature is mathematically expressed as
T̄_π = lim_{N→∞} (1/N) Σ_{n=1}^{N} T(s_n, π(s_n))
Notice that the expectation operator is dropped since temperature is not a random variable. Next, the details of the MDP models are given. First, in the average thermal increment model, the problem is formulated as a constrained MDP where a particular objective function is optimized while putting various constraints on the other QoS metrics. The first MDP formulation maximizes the system transmission rate while keeping the average power, delay, thermal increment and loss rate within given bounds. The second MDP model, on the other hand, optimizes the system power consumption while respecting a minimum transmission rate and keeping the biosensor network in a safe operating zone.
A. LP Formulation for the Thermal Increment Model
Let x(s, a) indicate the decision variable in solving the MDP models obtained in the previous section; x(s, a) represents the steady-state probability that the system is in state s and action a is performed. Based on different rewards and depending on the QoS parameters, we want to optimize over x(s, a) to obtain an optimal policy which describes what action to take when the system is in state s. The proposed MDP models are solved using the LP algorithms in MATLAB [11] to obtain optimal operating policies for the correlated wireless channel. The default mode of the LP solver is to minimize the reward function. Since the problem is formulated as an average-cost constrained MDP, certain basic constraints must be applied in each implementation:
Σ_{s∈S} Σ_{a∈A} x(s, a) = 1
Σ_{a∈A} x(s′, a) = Σ_{s∈S} Σ_{a∈A} x(s, a) P(s′ | s, a)  for all s′ ∈ S
x(s, a) ≥ 0
The first constraint ensures that x(s, a) is a probability distribution whose sum over all pairs of system states and actions equals one. The second constraint ensures that we are solving an average-cost constrained MDP. The third constraint enforces that the decision variable x(s, a) is always positive. These basic constraints are common to all the LP models given in this paper. The first LP model is about the maximization of the transmission rate (i.e., throughput). The details of the model are as follows:
max Σ_{s∈S} Σ_{a∈A} x(s, a) × R(s, a)    (20)
s.t. Σ_{s∈S} Σ_{a∈A} x(s, a) × P(s, a) ≤ P_O    (21)
Σ_{s∈S} Σ_{a∈A} x(s, a) × L(s, a) ≤ L_O    (22)
Σ_{s∈S} Σ_{a∈A} x(s, a) × T(s, a) ≤ T_h    (23)
Σ_{s∈S} Σ_{a∈A} x(s, a) × D(s, a) ≤ D_O    (24)
The constraints in (21)-(24) make sure that the average values of the power consumption P(s, a), loss rate L(s, a), thermal increment T(s, a) and delay D(s, a) do not exceed their thresholds P_O, L_O, T_h and D_O, respectively. In the next LP model, the objective is to minimize the average transmission power, using the other metrics as constraints. The details of the model are:
min Σ_{s∈S} Σ_{a∈A} x(s, a) × P(s, a)    (25)
s.t. Σ_{s∈S} Σ_{a∈A} x(s, a) × R(s, a) ≥ R_O    (26)
Σ_{s∈S} Σ_{a∈A} x(s, a) × L(s, a) ≤ L_O    (27)
Σ_{s∈S} Σ_{a∈A} x(s, a) × D(s, a) ≤ D_O    (28)
The first constraint ensures that there is a minimum average throughput. The remaining two constraints put upper limits on the loss rate and delay, respectively.
B. LP Formulation for the Strict Temperature Model
The LP formulation of the strict temperature model is similar to that of the thermal increment model discussed above. However, the reader is reminded that the system state now includes the temperature as a state variable. This represents a global constraint; thus, there is no explicit constraint on the temperature increase as in the previous LP models. The details of the new LP model are:
max Σ_{s∈S} Σ_{a∈A} x(s, a) × R(s, a)    (29)
s.t. Σ_{s∈S} Σ_{a∈A} x(s, a) × P(s, a) ≤ P_O    (30)
Σ_{s∈S} Σ_{a∈A} x(s, a) × L(s, a) ≤ L_O    (31)
Σ_{s∈S} Σ_{a∈A} x(s, a) × D(s, a) ≤ D_O    (32)
where x(s, a) represents the decision variable for the optimization of the average transmission rate, and P_O, L_O and D_O represent the thresholds on the average transmission power, loss rate and delay, respectively.
C. Finding the Optimal Policy
After solving the above LP models, a probability distribution over the state-action space is obtained. We would like to find a policy that tells us what action should be performed in each system state with a probability of one. This can be achieved as follows:
π*(s, a) = x*(s, a) / Σ_{a′∈A} x*(s, a′)
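The paper solves these LPs with MATLAB's LP routines; as an illustration, the rate-maximization model (20)-(24) can equally be posed with scipy.optimize.linprog. The matrix bookkeeping below (state flattening, balance equations) is a generic average-cost LP-MDP construction, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_rate_lp(R, P, L, T, D, P_sas, P_O, L_O, T_h, D_O):
    """R, P, L, T, D: per-(state, action) metric arrays of shape (n_s, n_a);
    P_sas[s, a, s'] is the transition matrix built as in the earlier sketch."""
    n_s, n_a = R.shape
    n = n_s * n_a
    c = -R.reshape(n)                      # maximize sum x*R  ==  minimize -R.x
    A_ub = np.stack([P.reshape(n), L.reshape(n), T.reshape(n), D.reshape(n)])
    b_ub = np.array([P_O, L_O, T_h, D_O])
    A_eq = np.zeros((n_s + 1, n))
    for sp in range(n_s):                  # balance: out-flow equals in-flow
        A_eq[sp, sp * n_a:(sp + 1) * n_a] += 1.0
        A_eq[sp] -= P_sas[:, :, sp].reshape(n)
    A_eq[n_s] = 1.0                        # x is a probability distribution
    b_eq = np.append(np.zeros(n_s), 1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)
    x = res.x.reshape(n_s, n_a)
    return x / (x.sum(axis=1, keepdims=True) + 1e-12)   # pi*(s, a)
```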
V. RESULTS AND DISCUSSION
In this section we numerically solve the model proposed in the previous section to obtain the optimal policies and then simulate them. In our simulation, we analyze the effect of various QoS constraints on the optimal policies. Then, we study the different optimal policies obtained by solving the average thermal increment and strict temperature models. The thermal behavior of the obtained policies is also discussed. The following system parameters are used in the model formulation and simulation; they are also described in Table II. Arrivals at the buffer input are assumed to be Poisson with an average arrival rate of three. The buffer size is set to eight samples. Eight channel states are considered; state zero is assumed to be the worst, with a very small gain. There are eight possible actions in each state of the system, i.e., transmitting from one up to seven samples, or no transmission. Based on these system parameters, the MDP model is formulated as a linear program and solved using MATLAB. The slowly varying Rayleigh model is described in Table I. It has an average power gain of 0.8 and a Doppler frequency of 10 Hz.
B. Analysis and Insights
For the purpose of analyzing the effect of the various constraints on the optimization of the average transmission rate and average power consumption, we vary the magnitudes of the constraints on the average loss rate, delay and thermal increment to study their effects on the objective function. Values of the input parameters are also varied, and their effects on both the constraints and the objective function are studied. First, the LP model expressed by equations (25)-(28) is studied. Figure 4 shows the effect of varying L_O. It can be seen that the average transmission power decreases as the allowed average loss rate increases. Since more samples are allowed to drop when the loss rate constraint is increased, the optimal policy uses the least amount of power possible for transmission. Also, increasing the arrival rate increases the average power consumption of the system, because there are more samples in the buffer which need to be transmitted. Figure 5 shows the effect of varying the delay constraint D_O. It can be seen that the optimal average transmission power decreases as the average delay constraint is increased. This indicates that as the constraint on the average delay is increased, samples are allowed to experience more delay, which results in lower average power consumption. The effect of changing the average arrival rate λ on the
average delay constraint is studied next.
Fig. 6: The increase in the minimum average transmission rate constraint (R_O) causes an increase in the optimal average transmission power utilized.
Figure 5 shows the variations in the average delay and the optimal average transmission power for different arrival rates. The delay and the average arrival rate have an inverse relationship. For example, for a fixed D_O, the left side of the delay constraint in equation (28) will be reduced if we increase the average arrival rate. This in turn should increase the optimal average power consumption in order to achieve the same delay constraint. By contrast, the behavior observed in Figure 5 is the opposite. This can be explained by the fact that the delay is directly proportional to the buffer occupancy, while it is inversely proportional to the average arrival rate. So, based on the insights obtained from Figure 5, we can conclude that the effect of the increased delay dominates the reduction achieved by increasing the average arrival rate, which in turn reduces the average power consumption. We next study the effect of a minimum average transmission rate requirement on the optimization of the average power. The behavior obtained after applying the minimum average transmission rate constraint in equation (26) is shown in Figure 6. It can be seen that as the value of the constraint increases, the optimal average power consumption increases. This happens because the increase in the minimum average transmission rate constraint requires the biosensor node to transmit more samples; as a result, the optimal value of the average power consumption increases. Next, the LP model expressed by equations (20)-(24) is studied. In the same way, the value of P_O is varied, and the results are plotted in Figure 6. It can be seen that the optimal average transmission rate increases as the average transmission power bound P_O increases. This indicates that as the constraint on the average power is increased, more power is available, which can then be used to transmit a larger number of samples; of course, this results in higher transmission rates. The effect of increasing the arrival rate on the average transmission rate is depicted in Figure 7. It can be seen that, as the average arrival rate increases, the average transmission rate decreases.
Fig. 7: The effect of increasing the average arrival rate (λ) on the optimal average transmission rate as the average power constraint (P_O) increases.
This is due to the fact that any increase in the average arrival rate causes an increase in the loss rate, which in turn reduces the average transmission rate of the biosensor. Maximization of the average transmission rate can cause the temperature of the system to increase by a large amount. The minimization of the average transmission power indirectly minimizes the system's thermal state increment by minimizing the power consumption. However, for the maximization of the average transmission rate, we need to explicitly include a constraint that controls the increase in the thermal state of the system at the symbol level. In order to study the effect of the constraint in equation (23), the value of T_h is varied to obtain various optimal policies. The results are then used to calculate the optimal average transmission rates. Figure 8 shows that the average transmission rate increases as the allowed average thermal increment increases. This comes at the cost of damaging the tissues, of course; so, we should try to keep the thermal increase constraint as small as possible.
It should be pointed out that a change in the average delay constraint does not affect the average transmission rate. The reason for this behavior is that the delay depends on the buffer state and the average arrival rate. If we keep the average arrival rate constant, the delay becomes directly related to the state of the buffer. But changes in the buffer state also cause similar changes in the transmission rate. As a result, the optimal average transmission rate stays constant as the average delay constraint is varied. However, if we increase the arrival rate at the input of the buffer, the average loss rate and the delay both increase. This causes a reduction in the optimal average transmission rate, as shown in Figure 9.
C. Optimal Policies for the Thermal Increment Model
In this section, we study the thermal increment model and how the thermal increment constraint affects the optimal policies.
Fig. 9: Increasing the value of the average delay constraint does not have any effect on the average transmission rate; however, the rate decreases as the average arrival rate increases.
The optimal policy that results from solving the LP model in equations (25)-(28) is plotted in Figure 10. The minimum average transmission rate constraint R_O is set to 0.07, the average delay constraint D_O is set to 10 msec, and the average loss rate constraint L_O is set to 2 samples. The 3D plot indicates that as the channel state improves, the policy suggests making a transmission. Similarly, an increased number of samples in the buffer also indicates that the transmitter should start sending more samples to the base station. However, since the objective is to minimize the average power consumption and the minimum average transmission rate constraint is quite small, a maximum of one sample is transmitted even in the best channel state. This has the advantage of reducing the temperature increase of the biosensor node. However, if we increase the minimum average transmission rate constraint to 0.35, it can be seen in Figure 11 that the number of samples transmitted as the buffer state improves increases accordingly. The optimal policies obtained from the different LP models have distinctive behaviors. They are observed to be monotonically increasing in the channel and buffer states of the system. This means that as the channel state improves or the buffer state increases, the optimal policy also increases monotonically. When embedding these policies into actual hardware, we can define the actions in terms of increasing values of the channel and buffer state information. The controller can make an easy decision based on these thresholds defined by the optimal policy. This behavior can thus help in the practical implementation of these optimal policies on biosensor hardware.
The optimal policies computed in the previous section are simulated using MATLAB, and the results are compared with a greedy policy. In the case of transmission rate maximization, the greedy policy works on the principle of always trying to transmit the maximum number of samples allowed under the given system state without exceeding the constraints on the average loss rate, average thermal increment, average delay and average transmission power. As for transmission power minimization, the greedy policy works by transmitting the least number of samples possible without violating the required average transmission rate. Each data point is the result of running the simulation five times. Both policies are simulated for different numbers of time slots and their results are compared. The performance of the average transmission rate maximization policy against the greedy policy is shown in Figure 12. Clearly, this figure indicates that the optimal policy outperforms the greedy policy in terms of the total number of transmitted samples.
D. Strict Temperature Model
In this section, we study the LP model expressed by equations (29)-(32). The obtained optimal policy is simulated and the temperature variations are observed. Similar to the previous approach, a comparison is performed with a greedy policy. The conclusion is that the optimal policy provides better performance. We choose four temperature levels to represent the temperature states in the strict temperature model. The lower and upper bounds on the temperature are set to 37°C and 40°C. The numbers of channel and buffer states are both set to eight. The average arrival rate at the input of the buffer is set to three. The optimal policy allows transmissions only when the temperature is in state one. For higher temperature states, the policy chooses the sleep action to keep the thermal state of the system within the provided constraints. Again, the behavior of the optimal policy is observed to be monotonic in the channel and buffer states. The policy ensures that more samples are transmitted as the states of the wireless channel and buffer improve. The average temperature and power constraints are also kept within bounds. It is also observed that when the temperature is in its worst state, the policy suggests not transmitting any samples in order to save the biosensor from entering the highest thermal state. Therefore, the optimal policy is also monotonic in the temperature states. The optimal policy computed for the transmission rate maximization problem is also compared with a greedy policy that satisfies the constraints given in the model. The greedy policy always tries to transmit the maximum possible number of samples while respecting the QoS constraints; a running average of all the constraints is used to make the decision in each time slot. The simulation is run five times for each number of slots and the average results are calculated. Figure 13 shows the results obtained by running the simulation for up to 10000 time slots. The results indicate that the optimal policy again outperforms the greedy policy in terms of the total number of transmitted samples. However, the difference between the two is small compared to the optimal policy for the previous average thermal increment model.
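The following is a sketch of the kind of simulation loop behind the optimal-versus-greedy comparisons reported above; the greedy rule is paraphrased from the text (running-average constraint tracking is omitted for brevity), and the helpers come from the earlier sketches.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy, n_slots=10_000):
    c, b, sent = 0, 0, 0
    for _ in range(n_slots):
        a = policy(c, b)
        sent += a
        sigma = rng.choice(len(arr), p=arr)   # truncated Poisson arrivals
        b = min(b - a + sigma, beta)          # buffer update (overflow dropped)
        c = rng.choice(K, p=P_C[c])           # slowly varying channel
    return sent

def greedy(c, b):
    # Transmit as much as the buffer allows, up to the largest action;
    # the running-average constraint checks of the text are omitted here.
    return min(b, alpha_max)

def from_lp(pi_star):
    # Follow the LP policy's most probable action in each (c, b) state,
    # assuming rows of pi_star are indexed as c * (beta + 1) + b.
    return lambda c, b: min(int(np.argmax(pi_star[c * (beta + 1) + b])), b)

print("greedy total transmitted:", simulate(greedy))
```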
VI. CONCLUSION
In this paper, the problem of QoS provisioning in biosensor networks has been studied using the framework of MDPs. The newly proposed model captures the interaction between the wireless channel and the buffer at a biosensor node. The obtained policies maximize network throughput and lifetime under several QoS constraints. They are also monotonic, which means that they can be easily realized. Further, the simulation of the thermal behavior of the optimal policies indicates that the strict temperature model provides better control over the temperature increase when compared to the average thermal increment model.
Fig. 1: Biosensors are implanted inside the body of a human to collect physiological measurements and transmit them over a wireless channel to an access point for further processing.
Fig. 4: Reduction in the optimal average transmission power as the average loss rate constraint (L_O) is varied.
Fig. 5: The optimal average transmission power decreases as the average delay constraint (D_O) increases.
Fig. 8: The optimal average transmission rate increases as the average thermal increment (T_h) constraint increases.
Fig. 10: Optimal policy for minimizing average power consumption with R_O = 0.07, D_O = 10 msec and L_O = 2 samples.
Fig. 11: An increase in the minimum average transmission rate constraint (R_O = 0.35) results in an increased number of sample transmissions in the optimal policy.
Fig. 12: Comparison of sample transmissions for different policies with a varying number of time slots.
Fig. 13: Comparison of sample transmissions for different policies achieved by the average transmission rate maximization.
Table II: Simulation parameters.
Table I: Channel states and transition probabilities.
2017-05-03T21:07:46.538Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "74c28dc1e838b824d6c8a924a4e1c252b29d5a42", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume7No7/Paper_74-Quality_of_Service_Provisioning_in_Biosensor.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "74c28dc1e838b824d6c8a924a4e1c252b29d5a42", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
268263669
pes2o/s2orc
v3-fos-license
Opto-controlled C9orf72 poly-PR forms anisotropic condensates causative of TDP-43 pathology in the nucleus
Proteinaceous inclusions formed by C9orf72 derived dipeptide-repeat (DPR) proteins are a histopathological hallmark in ~50% of familial amyotrophic lateral sclerosis/frontotemporal dementia (ALS/FTD) cases. However, DPR aggregation/inclusion formation could not be efficiently recapitulated in cell models for four out of five DPRs. In this study, using optogenetics, we achieved chemical-free poly-PR condensation/aggregation in cultured cells, with spatial and temporal control. Strikingly, nuclear poly-PR condensates had anisotropic, hollow-centre appearance, resembling anisosomes formed by aberrant TDP-43 species, and their growth was limited by RNA. These condensates induced abnormal TDP-43 granulation in the nucleus without the activation of stress response. Cytoplasmic poly-PR aggregates that formed under prolonged light stimulation were more persistent than its nuclear condensates, selectively sequestered TDP-43 in a demixed state and surrounded spontaneous stress granules. Our data suggest that poly-PR anisotropic condensation in the nucleus, causative of nuclear TDP-43 dysfunction, may constitute an early pathological event in C9-ALS/FTD. Anisosome-type condensates may represent a more common cellular pathology in neurodegeneration than previously thought.
Highlights
- Optogenetics can be used to model C9orf72 DPR condensation in cultured cells.
- Opto-PR forms hollow nuclear condensates, and RNA limits their growth by fusion.
- Opto-PR condensation leads to stress-independent TDP-43 pathology in the nucleus.
- Cytoplasmic poly-PR assemblies are persistent and selectively sequester TDP-43.
Graphical abstract
Introduction
A G4C2 hexanucleotide repeat expansion (HRE) in the first intron of the C9orf72 gene is the most common genetic alteration associated with amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) (DeJesus-Hernandez et al, 2011; Renton et al, 2011). Healthy individuals commonly carry 2 repeats, while C9-ALS/FTD is associated with ≥30 repeat lengths (Van Mossevelde et al, 2017). Production of dipeptide repeat (DPR) proteins from the C9orf72-HRE transcripts is one of the proposed mechanisms of repeat toxicity (Ash et al, 2013; Mori et al, 2013; Zu et al, 2013). Both sense and antisense C9orf72-HRE transcripts are translated in all reading frames via non-canonical, repeat-associated non-AUG (RAN) translation to produce five DPRs: poly-GA, -GR (sense), -PR, -PA (antisense) and -GP (both sense and antisense). All five DPRs have been detected in the patient CNS, primarily as cytoplasmic and less often nuclear inclusions in neurons and glia (Gendron et al, 2013; Mackenzie et al, 2015; Saberi et al, 2018; Sakae et al, 2018; Schludi et al, 2015). Whilst poly-GA and -GP are the most abundant, arginine-containing DPRs (R-DPRs) poly-GR and poly-PR are the most toxic species in cells and in vivo (Kwon et al, 2014; Mizielinska et al, 2014; Moens et al, 2019). For example, nuclear poly-PR aggregation was sufficient to induce ALS-like phenotypes in non-human primates (Xu et al, 2023), and expression of this DPR was extremely toxic in mice (LaClair et al, 2020; Zhang et al, 2019). R-DPR inclusion pathology seen in patients and mouse models (Chew et al, 2019; Choi et al, 2019; Cook et al, 2020) is not easily reproducible in cultured cells, where R-DPRs typically display diffuse distribution (outside the nucleolus), even when overexpressed and independent of the repeat length; this is
in contrast to poly-GA that readily forms large cytoplasmic aggregates (Frottin et al, 2021; Hartmann et al, 2018; Liu et al, 2022; Lopez-Gonzalez et al, 2016; Vanneste et al, 2019). High arginine content renders R-DPRs highly hydrophilic and hence soluble. Although poly-PR and poly-GR have similar biochemical properties, molecular dynamics simulations revealed that poly-PR is capable of (limited) self-association, forming either dimers or small amorphous oligomers, whereas poly-GR is not likely to form any stable oligomeric species (Zheng et al, 2021). Consistently, in vitro, R-DPRs undergo phase separation only in the presence of additional agents, RNA or proteins (Balendra et al, 2023; Boeynaems et al, 2017; Lee et al, 2016; Zhang et al., 2019). This suggests that certain factors in the cellular environment drive R-DPR loss of solubility in ALS/FTD. Sensitive immunoassays revealed that soluble DPRs are less abundant in the more affected brain regions, as compared to the relatively spared regions (Quaegebeur et al, 2020). Intermediate products of R-DPR aggregation may contribute to cellular dysfunction; however, due to the difficulty of modelling this molecular event, their role could not be investigated in the cellular setting. We hypothesised that the use of optogenetics (Park et al, 2017; Shin et al, 2017) would allow circumventing the intrinsic solubility of R-DPRs, by triggering and maintaining an oligomerised R-DPR state. Indeed, using Cry2olig tagging, we were able to reliably induce condensation of poly-PR ("opto-PR") in cultured cells, with spatial and temporal control. Using this model, we show that: i) Poly-PR can form nuclear condensates with a specific ordered arrangement reminiscent of TDP-43 "anisosomes" (Yu et al., 2021), and these assemblies trigger abnormal nuclear granulation of TDP-43. ii) Poly-PR cytoplasmic aggregates can form by clustering around stress granules and can nucleate cytoplasmic TDP-43 aggregates, whilst maintaining a TDP-43/DPR demixed state. These findings unveil a converging molecular mechanism for aberrant C9orf72-DPR and TDP-43 species: formation of ordered nuclear condensates. Furthermore, they link putative early disease-stage C9-ALS/FTD species to TDP-43 pathology and to a mechanism of large DPR inclusion nucleation/seeding in the cytoplasm.
Results
Optogenetic modeling of C9orf72 poly-PR condensation in cells
We utilised Cry2olig, a Cry2 variant with high oligomerisation capacity (Taslimi et al, 2014), in an attempt to induce and maintain R-DPR self-association. 'Opto-DPR' constructs were generated for codon-optimised expression of poly-PR, poly-GR and additionally poly-GP (36 repeats), tagged with Cry2olig and mCherry on the N-terminus (Fig. 1A). Opto-DPRs demonstrated a subcellular distribution pattern in HeLa cells typical for the 30-1000 repeat range (Bennion Callister et al, 2016; Kanekura et al, 2018; Liu et al., 2022; Vanneste et al., 2019), where poly-GR and -GP were predominantly cytoplasmic and poly-PR was predominantly nuclear, with high enrichment in the nucleolus; none of the DPRs showed signs of aggregation (Fig. 1B). Upon single-pulse light stimulation on a custom blue-light array, Cry2olig vector control, opto-PR and opto-GP but not opto-GR readily formed visible clusters/foci (Fig. 1B), consistent with the low poly-GR self-association capacity reported in simulation studies (Zheng et al., 2021). Opto-PR foci were small (<500 nm), dot-like and exclusively nuclear, whereas opto-GP formed large amorphous cytoplasmic inclusions (Fig.
We focused on opto-PR thereafter, using opto-GR as a control.

Nuclear opto-PR foci/condensates were negative for the two nucleolar markers tested, fibrillarin (FBL) and UPF1, confirming that they are not merely fragments of the nucleolus (Fig. S1A). Continuous 3-h blue-light stimulation led to an increased opto-PR condensate size, as compared to a single pulse (Fig. 1C). By super-resolution microscopy (SRM), the condensates induced by a single light pulse were found to represent a mixed population of ~100 nm dot-like and ~250 nm spherical, hollow-centre assemblies resembling the anisotropic vesicle-like structures formed by arginine-rich peptide/RNA mixtures in vitro (Alshareedah et al, 2020). Furthermore, the larger condensates formed after 3-h continuous light exposure were found to represent multiples of these spheroids (Fig. 1D). Although Cry2olig protein on its own also formed clusters throughout the cell in response to blue light (Fig. 1B) (Taslimi et al., 2014), these structures appeared disordered/filamentous and were clearly different from opto-PR condensates (Fig. 1D). Opto-PR condensates could be detected by a PR-repeat-specific antibody used in human tissue studies (Fig. 1E). Therefore, poly-PR confers a specific architecture to light-inducible condensates.

We next set up an imaging approach for simultaneous induction and tracking of opto-PR condensation on a confocal high-content imaging system. Even extremely short blue-light exposures and low 488 nm laser power (50 ms/5%) were sufficient to induce opto-PR foci; a combination of 500 ms exposure/80% laser power was used in subsequent experiments as consistently and robustly inducing condensates (Fig. S1B). Visible opto-PR condensates appeared within 2 min post-pulse and could be maintained by both short- and long-interval repetitive opto-stimulation (every 2 min or every 15 min, respectively; Fig. 1F). Condensates were reversible, typically resolving within ~14 min after the last light pulse (Fig. 1F). Opto-PR condensate nucleation was concentration-dependent, with more structures forming in higher-expressing cells (R=0.65) (Fig. 1G; Fig. S1C). FRAP analysis revealed limited dynamics of opto-PR within these assemblies, with low recovery after photobleaching of the entire structure, despite a significant amount of diffuse opto-PR in the nucleoplasm (Fig. 1H). A fraction of cells (22.0 ± 8.7%) developed persistent opto-PR condensates after continuous light stimulation, which were still detectable after 3 h of recovery in the dark (Fig. 1I).

Poly-PR was previously shown to interact with nucleophosmin (NPM1) and to co-partition with this protein into phase-separated droplets in vitro (Lee et al., 2016; White et al, 2019). Consistent with this, opto-PR condensates stained positive for NPM1, where NPM1 formed a "shell" around the condensates, suggesting its secondary recruitment (Fig. 1J). Interestingly, opto-stimulation induced opto-PR signal segregation in the nucleolus (Fig. S1D). In this, we observed intra-nucleolar demixing of opto-PR from NPM1, where the proteins fully co-localised in the granular component (GC) under dark conditions but formed two distinct phases after opto-stimulation (Fig. S1E,F). In contrast, opto-PR and FBL (the latter residing in the nucleolar dense fibrillar component, DFC) showed no co-localisation under both dark and light conditions (Fig. S1E,F).
R-DPRs were shown to promiscuously interact with membraneless organelle (MLO) components, i.e. RNA and proteins with low-complexity domains, leading to widespread MLO dysfunction (Kwon et al., 2014; Lee et al., 2016; Lin et al, 2016; Liu et al., 2022). We investigated the effect of opto-PR and its condensation on MLOs, focusing on those in the nucleus due to the predominantly nuclear localisation of this DPR. Systematic analysis of four nuclear MLOs (gems, Cajal bodies, paraspeckles and speckles) revealed only minor changes in their number and size in the presence of diffuse or condensed opto-PR (Fig. S2). Cytoplasmic stress granules (SGs) were not induced by opto-PR with or without blue-light stimulation (3 h continuous), consistent with its mainly nuclear localisation (data not shown).

Thus, microscopically visible DPR self-assembly/condensation in cultured cells can be achieved using Cry2olig tagging, allowing the formation of DPR-specific, ordered assemblies. Opto-PR condensates are characterised by concentration-dependent growth and low dynamic properties associated with persistence, and sequester NPM1.

RNA limits the growth of anisotropic poly-PR condensates in cells

We next asked whether our opto-model can be used to characterise potential modifiers of poly-PR condensation. RNA was previously found to promote R-DPR phase separation in vitro (Balendra et al., 2023; Boeynaems et al., 2017; Gittings et al, 2020). More recently, using RNA-protein crosslinking, R-DPRs have been shown to bind RNA in cells, in particular ribosomal RNA (rRNA), with a preference for GA-rich sequences (Balendra et al., 2023; Ortega et al, 2023). Using an electrophoretic mobility shift assay (EMSA) with an RNA oligonucleotide representing a naturally occurring RNA sequence containing 5xGA repeats (Clip34nt) (Bhardwaj et al, 2013), we indeed observed that synthetic poly-PR and -GR peptides (10-mers) form complexes with RNA (Fig. 2A).

Having confirmed RNA binding, we next studied the impact of RNA depletion on opto-PR condensates. Opto-PR expressing cells were treated with two transcriptional blockers, a global inhibitor (actinomycin D) and an RNA polymerase II-specific inhibitor (5,6-dichloro-1-β-D-ribofuranosylbenzimidazole, DRB), followed by induction and tracking of opto-PR condensates in individual cells for up to 4 h (long-interval repetitive stimulation, in the presence of the inhibitor). Both inhibitors caused dramatic nucleolar shrinking, confirming their activity in cells (Fig. 2B) (Shav-Tal et al, 2005). Opto-PR condensates formed in RNA-depleted conditions appeared noticeably larger compared to control (Fig. 2B). A similar result was obtained with a 3-h continuous opto-stimulation on the blue-light array, and quantification confirmed a significant increase in the opto-PR condensate size (Fig. 2C). These larger structures remained negative for the nucleolar marker FBL (Fig. 2C). DRB is a reversible inhibitor, which allowed us to analyse possible changes in the stability of RNA-depleted opto-PR condensates. Cells were stimulated for 3 h in the presence or absence of DRB, followed by 2 h of recovery in the dark. Opto-PR condensates formed in the presence of DRB appeared significantly more persistent, still detectable following the recovery in 63 ± 6.4% of cells, compared to 30 ± 7.8% of cells in the control condition (Fig. 2D). The increase in the opto-PR condensate size upon actinomycin D treatment was accompanied by a decrease in their number (Fig. 2C), suggesting clustering or fusion.
SRM analysis revealed that opto-PR condensates in actinomycin D treated cells were no longer clusters of the individual ~250 nm spherical units seen in the untreated or DRB condition but instead represented larger (>500 nm) hollow-centre spheres (Fig. 2E). RNA on the surface of condensates has been shown to limit their fusion, including in cells (Cochard et al, 2022). Actinomycin D, but not DRB, depletes ribosomal RNA (rRNA), and we observed dense rRNA signal on the opto-PR condensate surface which disappeared after actinomycin D treatment (Fig. 2F). Therefore, the opto-PR condensates in actinomycin D treated cells may form by fusion of the smaller units due to rRNA depletion from the surface, followed by relaxation into a larger spherical assembly.

Another putative modifier of R-DPR self-association/condensation is arginine dimethylation (DMA), which increases the fluidity of R-DPR droplets in vitro and was found enriched in R-DPR inclusions in C9-ALS/FTD patient tissue (Gittings et al., 2020). We employed two small-molecule methyltransferase inhibitors: MS023, inhibiting five type-I methyltransferases that synthesise asymmetric DMA (aDMA), and EPZ015666, a specific inhibitor of PRMT5, responsible for most symmetric DMA (sDMA) (Chan-Penebre et al, 2015; Eram et al, 2016). Opto-PR expressing cells were treated with MS023 and EPZ015666 for 24 h and then exposed to blue light for 3 h continuously, followed by opto-PR condensate quantification. We observed a mild decrease in the condensate number after aDMA depletion, without changes in their size or structure (Fig. S3). Thus, removing DMA marks may attenuate opto-PR condensate nucleation, although this effect is small. Therefore, our cellular opto-PR model can be utilised to analyse modifiers of poly-PR self-assembly in the cellular context, as exemplified by the modulatory effect of RNA that we have uncovered.

Poly-PR condensation induces nuclear TDP-43 pathology

R-DPR interactomes are enriched in RNA-binding proteins (RBPs) (Kwon et al., 2014; Lee et al., 2016). We therefore examined the effect of nuclear opto-PR condensation on ALS/FTD-relevant RBPs. TDP-43, FUS, NONO and SFPQ, all tagged with GFP, were co-expressed with opto-PR, and their subcellular distribution was examined with and without opto-stimulation. Opto-PR presence per se did not affect RBP distribution (Fig. 3A; Fig. S4A). However, light-stimulated opto-PR expressing cells displayed a striking nuclear condensation phenotype for TDP-43 but not the other RBPs analysed (Fig. 3A; Fig. S4A). Although we did observe TDP-43 condensation in a fraction of light-stimulated Cry2olig-expressing cells, the effect was significantly smaller than in opto-PR cultures; nor was it observed in light-stimulated opto-GR expressing cells (Fig. 3B). Opto-PR assemblies were frequently found in physical contact with TDP-43 condensates (44 ± 0.5% of all opto-PR foci) (Fig. 3C), suggestive of a direct nucleating effect of oligomerising opto-PR. Fractionation confirmed reduced solubility of TDP-43 GFP after induction of opto-PR condensation but not in light-stimulated opto-GR expressing cells (Fig. 3D). This analysis also confirmed reduced solubility of opto-PR but not opto-GR in light-stimulated cells (Fig. 3D).
TDP-43 GFP nuclear condensates induced by opto-PR were strikingly similar in their morphology to the condensates formed during the recovery from arsenite stress (Wang et al., 2020; Cohen et al., 2015; Huang, Ellis et al., manuscript in revision). Like these stress-induced foci, TDP-43 condensates induced by opto-PR were devoid of polyA+ RNA (Fig. 3E). We therefore asked whether TDP-43 condensation elicited by opto-PR was due to an upregulated stress response. Classic cellular stress markers, phospho-eIF2α, GADD34 and ATF4, were not altered in opto-PR expressing cells after 3-h light stimulation (Fig. S4B,C). This is in contrast to the dramatic upregulation of these markers during the recovery from sodium arsenite stress used as a positive control (Fig. S4C). In addition to demonstrating the stress-unrelated nature of TDP-43 condensates induced by opto-PR self-assembly, this experiment also confirmed a lack of phototoxicity in our model. TDP-43 acetylation was shown to impair its RNA binding and enhance aggregation, with acetylated TDP-43 inclusions detected in sALS (Cohen et al, 2015). We investigated the effect of opto-PR condensation on an acetylation-mimic TDP-43 mutant, K145Q (Cohen et al., 2015). In agreement with the published data, TDP-43 K145Q was prone to spontaneously forming nuclear foci/granules in naïve cells (Fig. S4D). Opto-PR condensation potentiated the pro-aggregating effect of acetylation, further increasing nuclear granulation of TDP-43 K145Q (Fig. 3F,G).

We next examined the effect of R-DPRs on TDP-43 higher-order assembly in vitro, using the condensate immunodetection/imaging assay we recently developed (Hodgson, Huang et al., 2024, doi: 10.2139/ssrn.4721338) (Fig. 3H). Both poly-PR and poly-GR peptides were included in these studies. The "supernatant" fraction of recombinant TDP-43 (after removal of preformed aggregates), containing small protein clusters and soluble protein, was incubated with equimolar amounts of synthetic R-DPR peptides, followed by sedimentation and fixation of TDP-43 clusters on coverslips for immunostaining/imaging; in parallel, samples were fractionated by centrifugation for western blot analysis (Fig. 3H). Addition of both R-DPRs significantly enhanced TDP-43 clustering, manifested as an increased cluster size and decreased cluster number, with poly-GR being more potent (Fig. 3I). Fractionation confirmed increased partitioning of TDP-43 to the pellet fraction in the presence of R-DPRs (Fig. 3J). In contrast, addition of a peptide with a "generic" sequence moderately enriched in proline and containing no arginine residues (V5: GKPIPNPLLGLDST) did not induce TDP-43 clustering (Fig. S4E), ruling out a non-specific molecular crowding effect of R-DPRs.

Collectively, these results suggest that poly-PR condensation can directly cause changes to the nuclear distribution of TDP-43 without activation of stress signaling.

Cytoplasmic poly-PR assemblies are persistent and selectively sequester TDP-43

Having characterised the nuclear phenotypes, we asked whether our opto-PR model is amenable to reproducing the cytoplasmic pathology typical for DPRs. Continuous 24-h stimulation on the blue-light array resulted in significant redistribution of opto-PR to the cytoplasm, with cytoplasmic foci formation in 32% of cells (>300 transfected cells analysed; Fig. 4A,B). This was accompanied by a reduction in the incidence of nuclear opto-PR condensates (from 94% of cells after 3 h to 9% after 24 h of stimulation) (Fig. 4A,B).
Nuclear and cytoplasmic opto-PR foci induced by 24-h opto-stimulation were persistent, with no significant decline observed after 8 h of recovery in the dark (Fig. 4A,B). This is in contrast to the nuclear opto-PR condensates forming after a 3-h stimulation, which were largely cleared after 3 h of recovery in the dark (Fig. 4A; Fig. 1I). Furthermore, we found that in a small proportion of cells that developed spontaneous SGs, opto-PR assemblies surrounded the SGs (Fig. 4C). Opto-PR redistribution to the cytoplasm was not due to nuclear membrane damage/nuclear pore complex disruption, since nuclear retention of several RBPs was not affected in these cells (Fig. 4D). Therefore, impaired nuclear import of opto-PR under these conditions could be due to its submicroscopic oligomerisation in the cytoplasm. Strikingly, endogenous TDP-43, but not other ALS-related RBPs (FUS, NONO, SFPQ), was found to be enriched in the cytoplasmic opto-PR foci (Fig. 4D). In contrast, Cry2olig-only cytoplasmic structures induced by 24-h light stimulation were negative for TDP-43 (Fig. 4E). SRM revealed that cytoplasmic opto-PR assemblies were ~250 nm structures that, unlike the nuclear condensates, were not hollow (Fig. 4F). It also revealed that in these assemblies, opto-PR and TDP-43 remained demixed, with the TDP-43 signal primarily on the surface (Fig. 4F). A small fraction of cytoplasmic opto-PR foci were positive for p62 (Fig. 4G), but none of them stained positive for ubiquitin (data not shown).

We next examined whether opto-PR accumulated/aggregated in the cytoplasm after prolonged opto-stimulation affects stress-induced SGs (Boeynaems et al., 2017; Marmor-Kollet et al, 2020; Wen et al, 2014). Opto-PR and Cry2olig-only expressing cells subjected to long opto-stimulation or kept in the dark were analysed during the recovery from NaAsO2 (3 h time-point), using G3BP1 as a marker. SG disassembly was delayed in light-stimulated opto-PR expressing cells compared to unstimulated opto-PR cells or Cry2olig-only expressing (stimulated or unstimulated) cells (Fig. 4H). This finding further validates our cellular model as capable of reproducing the key molecular effects of R-DPRs on cellular RNA/RNP granule metabolism.

Therefore, our opto-model is amenable to the induction of cytoplasmic poly-PR accumulation and aggregation, with its cytoplasmic assemblies being significantly more persistent than those in the nucleus. Our data also point to a role for SGs in the growth of cytoplasmic poly-PR aggregates, as well as a role for poly-PR assemblies in "nucleating" TDP-43 aggregation in the cytoplasm.
Discussion

A vast body of knowledge on C9-DPR-related disease mechanisms has accumulated in the past decade, yet it remains unclear whether, and if so how, DPR self-assembly, which results in a C9-ALS/FTD hallmark pathology, contributes to the disease. Aggregation intermediates, and especially smaller, highly reactive oligomeric species, have been validated as toxic/pathogenic in the case of other neurodegeneration-linked proteins, such as tau and alpha-synuclein (Bengoa-Vergniory et al, 2017; Choi & Gandhi, 2018; Jucker & Walker, 2018). It is possible that equivalent DPR aggregation products play a role in C9-ALS/FTD. Poly-PR is the least abundant DPR in human tissue (Mackenzie et al., 2015; Davidson et al., 2016), even though, theoretically, it should be expressed at levels similar to the other DPRs (except poly-GP, which is produced from both strands). Although this can be due to variation in the expression mechanisms and antibody detection, it is also possible that neurons that accumulate poly-PR are lost early in disease due to its high toxicity, including that of its aggregation products. Although the inclusions of all five DPRs in the patient CNS are morphologically similar (Gendron et al., 2013), DPRs other than poly-GA fail to form microscopically visible assemblies in cell models (Frottin et al., 2021; Liu et al., 2022; Zhou et al, 2017). In order to circumvent the high solubility of R-DPRs in cells, we harnessed the Cry2olig opto-module (Taslimi et al., 2014), successfully used previously to promote self-association of neurodegeneration-linked proteins (Berard et al, 2022; Jiang et al, 2021; Mann et al, 2019). In line with the molecular dynamics predictions (Zheng et al., 2021), poly-GR's self-association capacity was too low even when facilitated by Cry2olig, not yielding visible condensation in cells. However, we succeeded in achieving the condensation of the oligomerisation-competent poly-PR.

Two key phenotypes, alterations to the nucleolus and to SGs, validate our opto-PR model in the context of the existing literature (Kwon et al., 2014; Marmor-Kollet et al., 2020; White et al., 2019). In our model, SG disassembly was impaired by cytoplasmic opto-PR (but not Cry2olig-only). This can be caused by the altered composition, and hence dynamics, of SGs formed in the opto-PR-rich milieu. We also observed sequestration of NPM1, a confirmed R-DPR interactor, into nuclear opto-PR condensates. Beyond these two phenotypes, however, we failed to detect the ubiquitous MLO disruption by poly-PR reported in some studies, and its condensation also had a limited effect on MLOs. This may be due to the high variability of these phenotypes depending on the cell type, DPR levels and repeat length. It would be important to establish consensus phenotypes that can be used for the benchmarking of novel cellular DPR models, and the nucleolar and SG pathology validated in multiple studies are the prime candidate readouts.
We found that poly-PR confers a specific ordered arrangement to the opto-induced nuclear condensates: a sphere with a hollow centre. Arginine-rich peptides form such anisotropic structures in the presence of RNA in vitro, under both RNA- and peptide-excess conditions (Alshareedah et al., 2020). Adopting this model, we speculate that upon interaction with cellular RNA, the small opto-PR oligomers nucleated with the aid of Cry2olig form nanocondensates with a neutral "head" and a charged "tail" that subsequently coalesce into RNA-coated micelles (~100 nm granules). These transition into a vesicle-like conformation "layered" with RNA on both the internal and external surface (~250 nm granules). Upon (r)RNA depletion, these condensates undergo fusion into larger (>500 nm) hollow-centre structures (Fig. S5A). An anisotropic condensate is a non-equilibrium state that may require ATP to be established and maintained (Bergmann et al, 2023), and in our system this state is fuelled by the light-induced Cry2olig oligomerisation. Poly-PR was shown in multiple studies to form nuclear structures in C9-ALS/FTD, varying from compact inclusions to less dense "territories" (Cooper-Knock et al., 2015; Mori et al., 2013; Davidson et al., 2016; Wen et al., 2014). Nuclear poly-PR condensates not overlapping with nucleolar markers also form in transgenic mice (Zhang et al., 2019). It would be interesting to establish whether the structures seen in mice adopt a hollow-centre morphology during their biogenesis. Anisotropic nuclear assemblies ("anisosomes") are formed by acetylated TDP-43 in cultured cells and in vivo (Yu et al, 2021). RBPs complexed with polyadenylated RNA form similar structures in cell models of spinal muscular atrophy (Narcis et al, 2018). Furthermore, nuclear condensates of RNA-binding-deficient TDP-43 and of the ALS-linked CREST protein (Kukharsky et al, 2015) also possess this typical morphology (Fig. S5B). Remarkably, DDX3X mutants causative of neurodevelopmental disorders also form cytoplasmic hollow condensates, and those composed of an aggressive RNA-binding-deficient mutant display low recovery in FRAP (Owens et al, 2023). It is possible that specific changes to cellular metabolism in neurological disease, e.g. altered protein and RNA stoichiometries, favour this assembly type. RNA facilitates LLPS of poly-PR in vitro (Balendra et al., 2023); however, cellular RNA restricts the growth of the non-dynamic opto-PR condensates in cells. Together with the reports on the solubilising effect of RNA on RBPs (Grese et al, 2021; Maharana et al, 2018; Mann et al., 2019; Shelkovnikova et al, 2014a) and on the widespread RNA degradation in ALS (Tank et al, 2018), our findings suggest that declining RNA levels may be a common factor underlying protein aggregation across ALS subtypes, including C9-ALS.
Transient nuclear TDP-43 condensation is a hallmark of the stress response (Cohen et al., 2015; Wang et al., 2020). We recently found that this molecular event leads to TDP-43 loss of function and prolonged STMN2 depletion and is dysregulated by TDP-43 mutations (Huang, Ellis et al., 2024, manuscript in revision). These phenotypes may become persistent with chronic or repetitive stress, precipitating the disease. Here we show that poly-PR self-assembly causes TDP-43 condensation in the absence of stress signalling, indicating convergence between C9-ALS and other ALS subtypes in this putative disease mechanism. Consistently, recent use of a specific RNA aptamer revealed abnormal nuclear TDP-43 granulation in motor neurons in ALS tissue (Spence et al., 2024).

Cytoplasmic poly-PR assemblies in our opto-model are structurally dissimilar to, and more persistent than, the nuclear condensates, probably reflecting the different environments of the two cellular compartments. This higher stability is consistent with the higher frequency of poly-PR cytoplasmic inclusions in patients (Zu et al., 2013). DPR inclusions are rarely found to co-deposit with TDP-43 inclusions in patient tissue, with clear anatomical region specificity (Al-Sarraj et al, 2011; Davidson et al., 2016). Some histopathological studies found evidence of DPR aggregation preceding the cytoplasmic TDP-43 pathology (Baborie et al, 2015; Vatsavayai et al, 2016). In our model, TDP-43 joined the cytoplasmic opto-PR foci, where the two proteins remained in two separate phases. It is tempting to speculate that transient cytoplasmic poly-PR assemblies "seed" TDP-43 pathology. SGs may play an equivalent role for cytoplasmic poly-PR inclusions, concentrating the initial poly-PR assemblies in a confined space and promoting their coalescence into a larger aggregate. Indeed, a recent study showed that cytoplasmic TDP-43 aggregates can be nucleated within SGs, as a separate phase, and left behind after SG dissolution (Yan et al, 2024). Inclusions in C9-ALS/FTD are likely composed of several DPRs, and their co-expression leads to different phenotypes in vivo (West et al., 2020). Opto-GP readily undergoes cytoplasmic condensation in our model, enabling future opto-DPR co-aggregation studies.

Limitations of the study. We used a relatively short repeat length (which nevertheless is in the patient range), whereas Cry2olig-mCherry is a large tag. However, multiple studies have demonstrated that DPRs with repeat lengths of 30-1000 have identical subcellular localisation and similar toxicity (Bennion Callister et al., 2016; Kwon et al., 2014; Miyagi et al, 2023; White et al., 2019). We cannot exclude that longer repeats will undergo condensation more readily, which should be tested in future studies. In addition, for this proof-of-principle study, non-neuronal cells were used for the ease of expression and imaging. To address potential cell specificity of the phenotypes revealed in this study, the opto-PR model should be transferred into a (human) neuronal system.
Cell culture, transfection and treatments

HeLa cells were obtained from ATCC via Sigma and cultured in Dulbecco's Modified Eagle Medium/Nutrient Mixture F-12 (DMEM/F-12) supplemented with 10% foetal bovine serum (FBS) and penicillin-streptomycin. For time-lapse imaging, cells were plated on PhenoPlate-96 (black, optically clear bottom, PerkinElmer) at a density of 2 × 10⁴. For all other experiments, cells were seeded in 24-well plates, with or without coverslips depending on the application, at a density of 5 × 10⁴ cells, unless otherwise stated. Transfection was performed 24 h prior to blue-light stimulation, using either Lipofectamine 2000 (ThermoFisher) or jetPRIME (Jena Bioscience) according to the manufacturer's instructions. For transcriptional inhibition, cells were treated with 2.5 µg/ml actinomycin D or 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole (DRB) (both Sigma). For inhibition of arginine methylation, cells were treated with 10 µM MS-023 or EPZ015666 (both ApexBio). To induce cellular stress, cells were treated with 250 µM NaAsO2 (Sigma). Incubation times for treatments within individual experiments are indicated in the respective figure legends.

Opto-stimulation

Cells expressing opto-constructs were stimulated with a 488 nm laser on an Opera Phenix HCS (500 ms, 80% laser power) for live-cell time-lapse imaging (under full environmental control conditions), or on a custom-built LED blue-light array housed in a humidified incubator maintained at 37 °C with 5% CO2. Cells were protected from light between experiments and during fixation.

Microscopy

Conventional fluorescence microscopy was performed using a 100x objective on an Olympus BX57 upright microscope equipped with an ORCA-Flash 4.0 camera (Hamamatsu) and cellSens Dimension software (Olympus). Super-resolution microscopy was performed using a 63x oil-immersion objective on a ZEISS 980 laser scanning confocal microscope (LSM) with an Airyscan 2 detector and ZEN Blue software. Time-lapse microscopy was performed using a 40x objective on the Opera Phenix HCS, and Harmony 4.9 software was used for image processing and analysis (all PerkinElmer). Image processing and profile drawing were done using ImageJ or ZEN Blue software. Condensate/aggregate quantification was done manually or in ImageJ in a blinded manner.

Fluorescence recovery after photobleaching (FRAP)

Cells seeded at a density of 2.8 × 10⁵ in glass-bottomed 35 mm dishes (Ibidi) were transfected and, 24 h post-transfection, subjected to 3-h continuous stimulation prior to FRAP analysis. Imaging was performed using a 63x oil-immersion objective on a ZEISS LSM 800 confocal microscope equipped with a humidified incubation chamber maintained at 37 °C with 5% CO2. FRAP acquisition was performed on condensates formed after 3 h of continuous stimulation. A circular region of interest (ROI) around each condensate was bleached using a 568 nm laser at 100% laser power. Images were captured pre-bleach, immediately following bleach and at 200 ms intervals during recovery. The mean fluorescence intensity within the ROI was determined for each image using ZEN Blue software. Intensity values were corrected for bleaching during imaging and normalised to pre-bleach intensity. Average values were plotted and FRAP curves fitted using a one-phase association equation in GraphPad Prism 9 software.
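The one-phase association fit used above in Prism can be reproduced with standard tools; below is a minimal Python sketch, assuming a bleach-corrected intensity trace already normalised to the pre-bleach value (the array names and the demo trace are hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def one_phase_association(t, y0, plateau, k):
    """F(t) = y0 + (plateau - y0) * (1 - exp(-k*t))."""
    return y0 + (plateau - y0) * (1.0 - np.exp(-k * t))

def fit_frap(t, norm_intensity):
    """Fit a normalised FRAP trace; return mobile fraction and half-time."""
    p0 = [norm_intensity[0], norm_intensity[-1], 1.0]  # bleach depth, plateau, rate
    (y0, plateau, k), _ = curve_fit(one_phase_association, t, norm_intensity, p0=p0)
    mobile_fraction = (plateau - y0) / (1.0 - y0)      # fraction free to exchange
    half_time = np.log(2.0) / k                        # time to half recovery
    return mobile_fraction, half_time

# Hypothetical low-recovery trace sampled at 200 ms intervals post-bleach.
t = np.arange(0, 10, 0.2)
trace = 0.2 + 0.25 * (1.0 - np.exp(-0.8 * t))
print(fit_frap(t, trace))  # ~0.31 mobile fraction, ~0.87 s half-time
```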
RNA expression analysis

Total RNA was extracted using the GenElute total mammalian RNA kit (Sigma) in accordance with the manufacturer's instructions. First-strand cDNA synthesis was performed using 500 ng of RNA with random primers (ThermoFisher) and MMLV reverse transcriptase (Promega). qRT-PCR was performed using qPCRBIO SyGreen Lo-ROX (PCRbio), and GAPDH was used for normalisation. Primer sequences are provided in a previous study (Shelkovnikova et al., 2017).

In vitro analysis of TDP-43 condensation

In vitro TDP-43 clustering analysis with immunodetection and imaging was performed as described in Hodgson, Huang et al., 2024 (doi: 10.2139/ssrn.4721338). Briefly, 1 µM total recombinant TDP-43 or its supernatant fraction (after centrifuging at 13,300 rpm for 1 min) was mixed with 1 µM poly-PR/GR peptides (as above, BioSynth) or a generic peptide (V5: GKPIPNPLLGLDST, Proteintech) in the assay buffer and incubated for 10 min. Samples were sedimented and fixed with glutaraldehyde on coverslips, blocked with 1% BSA in PBS for 1 h at RT and incubated with an anti-TDP-43 antibody (1:5000, mouse monoclonal, R&D Biosystems, MAB77781) in the blocking buffer for 2 h. After washes, TDP-43 protein clusters were visualised using an anti-mouse AlexaFluor488 antibody (1:2000, ThermoFisher), incubated for 1 h at RT. Coverslips were mounted on glass slides using Immumount (ThermoFisher). Images were taken using an Olympus BX57 upright microscope and ORCA-Flash 4.0 camera and processed using cellSens Dimension software (Olympus). Quantification of assemblies was done using the 'Analyze particles' tool of ImageJ. For confirmatory western blot analysis, recombinant TDP-43 samples incubated with or without the peptides as above (for 10 min, and an additional set for 4 h) were centrifuged at 1,000 × g for 10 min. Pellet and supernatant were analysed by western blot using a C-terminal TDP-43 antibody (Sigma).

Statistical analysis

Analysis was done using the respective tests in GraphPad Prism 9 software. N corresponds to the number of biological replicates. Error bars represent S.D. unless indicated otherwise.

(F) Opto-PR condensate induction and tracking using confocal longitudinal imaging. Opto-PR expressing cells were stimulated with a 488 nm laser (at 80% for 500 ms), coupled with mCherry imaging. Cells were stimulated/imaged either every 2 min ("short-interval") or every 15 min for up to 4 h ("long-interval"). Alternatively, cells with preformed condensates were imaged every 2 min without stimulation ("dissolution"). Representative images are shown. Scale bar, 10 µm.

(G) Opto-PR condensate nucleation is concentration-dependent. mCherry fluorescence intensity was measured in the nucleoplasm of individual cells, outside the nucleolus, and the number of condensates was quantified in the same cells at the peak of their assembly (7-min interval repetitive stimulation for 49 min). 75 cells were analysed. Also see Fig. S1C.

(A) Electrophoretic mobility shift assay (EMSA) with a natural GA-rich RNA sequence reveals R-DPR binding to RNA. The Cy5-labelled synthetic oligonucleotide "Clip34nt" (34-mer) and synthetic poly-PR and poly-GR peptides (10 repeats) were used. Representative gel is shown.

(B) RNA depletion promotes opto-PR condensate growth. Opto-PR expressing cells were pretreated with actinomycin D or DRB for 1 h, followed by long-interval repetitive blue-light stimulation (every 15 min) coupled with time-lapse imaging for up to 4 h (in the presence of the inhibitor). Representative images are shown. Scale bar, 10 µm.
(C) Opto-PR condensates formed under conditions of actinomycin D-induced RNA depletion are larger in size and less numerous than in RNA-sufficient cells. Cells were opto-stimulated for 3 h continuously. Representative images and quantification are shown. Note that the larger condensates remain FBL-negative. 30 and 60 cells per condition were included in the analysis for condensate size and number, respectively, from a representative experiment. **p<0.01, ***p<0.001, Student's t test. Scale bar, 5 µm.

(D) Opto-PR condensates formed in RNA-depleted conditions are more persistent. Opto-PR expressing cells were light-stimulated for 3 h continuously, with or without DRB addition, followed by DRB removal and recovery for 2 h in the dark.

(A,B) Prolonged, 24-h blue-light stimulation leads to cytoplasmic opto-PR redistribution and aggregation, with persistent assembly formation. The proportion of cells with opto-PR assemblies (nuclear and cytoplasmic) was quantified after 24-h blue-light array stimulation with and without 8-h recovery in the dark. Representative images (A) and quantification (B) are shown. Images for a 3-h stimulation and recovery are included for comparison. N=3 (300 cells analysed per condition). n.s., non-significant. Scale bar, 10 µm.

Figure legends

Figure 1. An optogenetic cellular system for controllable C9orf72 DPR condensation.

(A) Opto-DPR approach.

(B) Opto-DPR condensation can be induced by a single-pulse blue-light stimulation. HeLa cells expressing the respective opto-DPR or empty Cry2olig-mCherry vector were analysed 24 h post-transfection. The blue-light array (single pulse) was used. Scale bar, 10 µm.

(C) Continuous opto-stimulation induces larger opto-PR condensates as compared to a single light pulse. HeLa cells expressing opto-PR or empty vector were subjected to either a single-pulse or a 3-h continuous blue-light stimulation. 30 cells per condition were analysed from a representative experiment. **p<0.01, ****p<0.0001, Kruskal-Wallis test with Dunn's post-hoc test. Scale bar, 10 µm.

(D) Super-resolution microscopy (SRM) demonstrates structural differences between Cry2olig-only and opto-PR assemblies and between the condensates formed after single-pulse and continuous (3-h long) blue-light stimulation. Representative images and graphical representation are shown.

(E) Poly-PR, detected using a PR-repeat antibody, is enriched within opto-PR condensate rims. Representative image is shown.

(H) FRAP analysis after full opto-PR condensate photobleaching reveals low protein mobility between the condensate and the nucleoplasm. Representative image and FRAP curve for 25 cells from a representative experiment are shown. Error bars represent SEM. Scale bar, 10 µm.

(I) Nuclear opto-PR condensates can become persistent. Opto-PR condensates were induced by 3-h continuous stimulation on the blue-light array, and the proportion of condensate-positive cells was quantified immediately post-stimulation or after 3 h of recovery in the dark. N=3 (300 cells analysed in total).

(J) Opto-PR condensates are positive for nucleophosmin (NPM1). Opto-PR condensates induced by a 3-h continuous opto-stimulation were analysed by SRM. Representative images and profile plots are shown. Scale bar, 2 µm.

Figure 2. RNA limits opto-PR condensation in the nucleus.
Representative images and quantification are shown. N=3 (150 cells analysed in total). **p<0.01, Mann-Whitney U test. Scale bar, 10 µm.

(E) Opto-PR condensates formed in actinomycin D-treated cells are structurally different, as revealed by SRM. Representative images of condensates of a comparable size from control, DRB- or actinomycin D-treated cells, induced by 3-h continuous opto-stimulation, are shown alongside a graphical representation.

(F) Ribosomal (r)RNA depletion from the nucleus and opto-PR condensates in actinomycin D treated cells. Representative images are shown. Scale bar, 5 µm.

(D) Endogenous TDP-43, but not other RBPs, joins opto-PR assemblies induced by prolonged opto-stimulation. Also note the normal nuclear localisation of all RBPs in cells with cytoplasmic opto-PR. Cells were opto-stimulated for 24 h continuously. Representative images are shown. Scale bars, 10 µm.

(E) Cry2olig-only cytoplasmic assemblies are negative for TDP-43. Cells were opto-stimulated for 24 h continuously. Representative images are shown. Scale bars, 10 µm.

(F) TDP-43 remains demixed from opto-PR within cytoplasmic assemblies, as revealed by SRM. Cells were opto-stimulated for 24 h continuously. Representative image is shown.

(G) Cytoplasmic opto-PR assemblies are occasionally positive for p62. Representative image is shown. Insets 1 and 2 show examples of p62-positive and -negative assemblies, respectively. Scale bars, 10 µm.

(H) SG dissolution is affected in cells with cytoplasmically localised opto-PR. SGs were induced with NaAsO2 in cells expressing either opto-PR or Cry2olig-only, subjected to 24-h opto-stimulation or kept in the dark. Efficiency of SG dissolution was analysed 3 h into the recovery (post-NaAsO2 removal). Representative images and quantification are shown. N=3 (120 cells analysed per condition). *p<0.05, two-way ANOVA with Sidak's multiple comparisons test. Scale bars, 10 µm.
Deep-Learning-Guided Student Classroom Action Understanding for Preschool Education

A deep architecture for recognizing students' actions is proposed to improve preschool education. This paper connects the teaching objectives, teaching scope, teaching implementation, and evaluation status of preschool practice teaching. We attempt to address the problem of effective preschool teaching, and on this basis we propose simple adaptation strategies. We further evaluate the practice of preschool teaching and its effectiveness. In this way, civilized, high-quality preschool talents can be cultivated and preschool educational experiences promoted. In observing young children's classroom behavior, traditional action recognition algorithms struggle to identify specific student actions; we therefore propose an action recognition method based on the combination of deep feature fusion and a human skeleton representation. First, the joint spatial locations and constraints are fed into a long short-term memory (LSTM) model with a spatiotemporal attention mechanism, which is designed to obtain spatiotemporal and highly separable deep joint features. Afterward, a new mechanism is introduced to locate keyframes as well as key joints. Finally, based on a two-stream deep architecture, effective discrimination of similar actions is achieved by integrating color and shape features with the skeleton features in the designed deep model. Extensive experiments demonstrate that, compared with mainstream algorithms, this method can effectively distinguish similar student action types in the preschool classroom. Thus, the efficiency of preschool teaching can be substantially improved.

Introduction

As an important part of education, professional preschool teaching is a key factor in the quality of personnel training. Since the 20th century, as preschool education has moved beyond assumptions and speculation, all parts of society have gained a deeper understanding of its significance. In particular, with the widespread application of theories such as the "critical period" in practical education, the important role of preschool education in early development has been further clarified. As the foundation of basic education, preschool instruction has a profound impact on an individual's future development. Its content covers not only the care and nurturing of children but also the comprehensive development of young students' abilities, qualities, and adaptability. Higher vocational colleges have therefore carried out various explorations of preschool education practice. The goal of professional preschool education is to improve students' teaching and theoretical application abilities by leveraging practical activities. We used surveys and feedback to investigate school guidance, teacher guidance, and student levels. It is observable that there are notable problems in the implementation of some professional practice guidance activities. The curricula of preschool education programs determine the overall training goal, including the cultivation of research talents, applied talents, and hybrid applied talents.
Whatever the formulation of ability-oriented education, practical ability training has been incorporated into school programs, and most colleges have formed a basic awareness of practical teaching. The practical teaching systems of different colleges and universities can be roughly divided into two types, practice oriented and demonstration oriented, and as a whole they lack sound design and arrangement, restricted by factors such as limited resources and individual differences among students. Large-scale and small-scale practical educational activities alike are relatively scarce. Although there are clear requirements for the total class hours and semesters of each school year, the specific implementation has to be decided in combination with the real situation of the school and the kindergarten. These are only schedule-level arrangements; the practical teaching objectives are not clear, and the outcomes of practical ability training are unsatisfactory. Practical teaching in preschool education embraces all practical activities other than paper-based study. Its content can be roughly divided into observation activities, design activities, thematic activities, preschool planning, community engagement, parenting work, and teaching activities. It is discernible from the survey results that the scope of practical teaching is relatively wide. However, the proportions of the various contents in college preschool education programs are uneven: teaching activities account for more than 90%, while childcare and communication work account for less than 10%. Because of this imbalance in practical training, students face several problems. First, childcare practice ability is lacking: students acquire childcare knowledge mainly through coursework on children's health. Second, communication with children is not fully practiced: the difficulty of preschool education lies not in lecturing but in observing, perceiving, and communicating with children, which takes time to learn, and at present there is not enough time for such practice. Third, skills training is insufficient: theoretical teaching takes the largest share of the curriculum, followed by teaching activities in the five domains, while skills courses such as piano receive too little time, making it difficult to develop students' professional literacy. Fourth, individual guidance is limited: when students encounter problems in the learning process, they generally seek assistance from teachers, but deep problems are often treated as shallow ones, reflecting insufficient teacher understanding and guidance.

To facilitate preschool education by observing students more carefully, this paper proposes a two-stream network action recognition method based on the combination of skeletal joints and appearance features. The proposed technique first constructs spatial constraints based on the joints obtained from skeleton analysis. Thereafter, the obtained spatial constraints and joint coordinates are converted into pseudo-images and subsequently fed into the LSTM, which reduces frame redundancy and highlights the keyframes and the important joints.
We then use a heatmap derived from the spatiotemporal attention mechanism to locate important joints in the image and extract appearance features, such as color and texture, around them. Finally, with the support of a two-stream network, the appearance features and the deep joint features are fused to realize effective recognition of student actions in the preschool education context. Based on the above, the contributions of this paper can be summarized as follows.

(1) The spatiotemporal expressiveness of the skeleton representation is enhanced by constructing spatial constraints on the joints, and the constrained joint sequences are converted into pseudo-images.

(2) We build an LSTM model with a spatiotemporal attention mechanism, leveraging temporal attention-weight differences to discard similar frames; the resulting keyframes and key joints are located on a heatmap, around which regions are defined for appearance-feature extraction.

(3) Handcrafted appearance features are fused frame by frame with the deep features of the complete sequence computed by the LSTM, based on a two-stream network, for efficient and effective identification of similar students' actions.

Related Work

Apprenticeship and internship are the two main forms of practice in preschool education programs, and other practical activities rarely appear in professional practice [1-11]. Based on feedback from students and teachers on the implementation, the number of internships per school year is limited, and full-scale internship activities are concentrated in a single semester, in which students are exposed to practical work for the first time. In addition, the internship content is often disconnected from prior coursework; internship placement takes a long time, and arrangements are easily disrupted before they can be coordinated. Although the micro-teaching activities carried out in tutors' micro-classrooms are well organized and equipped with corresponding hardware and facilities, they are often rushed to complete the training schedule [2-6], and micro-teaching formats such as simulated classrooms are rarely included. The content of both apprenticeship and internship activities is arranged ad hoc, and the different forms of practice teaching are fragmented, disconnected, and have not yet formed a system. The practice teaching platform for preschool education majors is essential infrastructure for achieving practical training goals. It mainly includes laboratories, on-campus training platforms, on-campus practice bases, and off-campus internship bases. Most of the training and research bases in colleges and universities are still in the preparatory stage, and preschool education laboratories that can develop students' diversified teaching abilities remain insufficient. The off-campus practice base is an important support for practical activities and an important place for cultivating students' practical ability. Most college teachers and students report that the number of kindergartens available for internships is limited, usually to a few stable partner kindergartens [7-11].
Concerns about disrupting kindergarten routines make it difficult for students to gain kindergarten approval during practice, and their skills consequently develop slowly. The guiding role of teachers is an important guarantee for the realization of practical teaching goals, yet it shows considerable variability and subjectivity. Surveys found that only about 20 percent of preschool education programs report that teacher guidance plays a very important role in hands-on activities. Therefore, although teachers' participation in professional skills activities is commendable, supervisory guidance is clearly insufficient.

Shotton et al. [5-7] argue that learning a temporally continuous and highly separable joint representation can improve the performance of action recognition. Vemulapalli et al. [8] use 3D joint coordinates to analyze motion patterns for recognizing actions, and the proposed feature extraction scheme is simple and effective. However, it ignores the spatial relationships between joints, which limits accuracy. To address this, Ahmed et al. [9] encode joint relations in terms of distance and angle to improve accuracy, but recognition that relies solely on handcrafted features remains unsatisfactory. Beyond handcrafted features, deep models exploit nonlinear neural networks to extract deep action features and improve accuracy [10]. Among them, exploiting the strong spatial feature extraction ability of convolutional neural networks (CNNs), Banerjee et al. [11] encoded the skeleton sequence as a pseudo-image and performed recognition based on its deep CNN features. However, such encodings lack temporal dynamics information, which limits the accuracy improvement. To address this problem, recurrent neural networks (RNNs), with their good temporal modeling ability, can recognize actions with reasonable accuracy; however, the inherent vanishing-gradient problem of RNNs makes it difficult to learn long-term historical information [12]. Based on this, long short-term memory (LSTM) networks improve on the RNN architecture with a gated design, exhibit excellent modeling of long-term dependencies, and can be readily applied to action recognition [13-15]. Luo [16] encodes joint time series as image sequences and uses LSTM models to capture their temporal properties for action recognition. However, the above network-based recognition methods process sequences frame by frame without mining keyframes and key joints; the sequences often contain a large amount of redundancy, so the obtained features are insufficiently discriminative, and the accuracy improvement is limited. Based on this, the authors of [13-18] proposed a spatiotemporal attention LSTM (STA-LSTM) model, which uses a spatiotemporal attention mechanism to weight the skeletal frames and joints according to their relevance to the action, strengthening the representation of important elements and improving sensitivity to subtle movements.
However, this method only considers the joint coordinates and ignores spatial topological information, so the accuracy gain is limited. In summary, the above skeleton-based algorithms consider only the information carried by the skeleton itself and cannot fully express an action without appearance features.

Our Proposed Method

The proposed action recognition model mainly includes the following four components: first, constructing joint spatial constraints, namely relative joint distances and highly correlated joint pairs; second, constructing an LSTM model with a spatiotemporal attention mechanism; third, locating keyframes and key joints via heatmaps; and finally, on the basis of a two-stream network, fusing the deep skeleton features computed from the joint sequence with the appearance features computed from the image sequence frame by frame to improve accuracy.

Joint positions can effectively represent human poses and thus provide a highly separable action representation; recognition accuracy can be improved by feeding the joint sequence into a deep network to learn discriminative action features. The human body can be divided into five parts: the left arm, the right arm, the torso, the left leg, and the right leg. For K joints (K = 25 in this paper), let x_t^k = (x, y, z) denote the coordinates of joint k in the t-th frame. Then all joint coordinates can be expressed as X = {x_t^k | t = 1, ..., T; k = 1, ..., K}, where T is the number of frames in the sequence. Whether the body is static or moving, there is always a certain distance relationship between joints; thus the relative distances between joints can effectively describe the body configuration and are robust to changes in viewpoint and lighting. In addition, during movement the hip joint shows only small displacement and the other joints move around it, so the hip joint can be taken as the coordinate reference. From this, the Euclidean distance between the hip joint and the other joints can be expressed as d_t^k = ||x_t^k - x_t^hip||_2.

Any two joints in the human skeleton are connected by a definite number of skeletal edges, and the movement of a given joint drives nearby joints in sync. Based on this observation, this paper selects only first- and second-order joint pairs with high interactivity (i.e., pairs separated by only one or two skeletal edges) to build the joint spatial constraints and reduce computational complexity. Let c_t^{ij} = x_t^j - x_t^i denote the coordinates of the j-th joint relative to the i-th joint in the t-th frame; this encodes the spatial topology between the two. Here, a first-order pair is connected by only one edge, and a second-order pair is connected by two edges. In short, spatial constraints that strongly characterize the joint sequence of an action can thus be formulated (see the sketch below).

It is generally accepted that the frames and joints that most powerfully express an action are more important in the recognition process [18]. Taking a kicking sequence as an example, the kicking frames and leg joints are more indicative than static frames and the torso. Based on this, this paper constructs a spatiotemporal attention LSTM model that weights each frame and joint to reflect its importance.
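To make the joint spatial constraints defined above concrete, here is a minimal Python sketch, assuming a NumPy array of shape (T, K, 3) with the hip as joint 0; the edge-pair fragment and the min-max pseudo-image encoding are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def hip_distances(joints, hip=0):
    """Euclidean distance from every joint to the hip, per frame: (T, K)."""
    return np.linalg.norm(joints - joints[:, hip:hip + 1, :], axis=-1)

def relative_offsets(joints, pairs):
    """Relative coordinates c_t^{ij} = x_t^j - x_t^i for first-/second-order
    joint pairs (one or two skeletal edges apart): (T, P, 3)."""
    i = np.array([p[0] for p in pairs])
    j = np.array([p[1] for p in pairs])
    return joints[:, j, :] - joints[:, i, :]

def to_pseudo_image(features):
    """Min-max scale a (T, N, C) feature array to an 8-bit pseudo-image
    (rows = frames, columns = joints/pairs, channels = feature dims)."""
    lo = features.min(axis=(0, 1), keepdims=True)
    hi = features.max(axis=(0, 1), keepdims=True)
    return (255 * (features - lo) / (hi - lo + 1e-8)).astype(np.uint8)

# Hypothetical demo: 64-frame clip of the 25-joint skeleton, with a small
# fragment of first-order (one-edge) and second-order (two-edge) pairs.
clip = np.random.rand(64, 25, 3)
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
dist_img = to_pseudo_image(hip_distances(clip)[..., None])   # (64, 25, 1)
offs_img = to_pseudo_image(relative_offsets(clip, pairs))    # (64, 5, 3)
print(dist_img.shape, offs_img.shape)
```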
As mentioned above, each frame and each joint contribute differently to action recognition. Based on this fact, this section weights each joint using a spatial attention mechanism to reflect its importance and enhance action distinguishability. Let the attention weights of all joints at time t be α_t = (α_t^1, ..., α_t^K). The attention scores are computed as s_t = u_s tanh(W_f f_t + W_h h_{t-1} + b_s), where f_t is the input feature at time t, h_{t-1} is the hidden state of the previous LSTM step, W_f and W_h are the learnable weight matrices of the input and the hidden state, respectively, b_s is a bias, and u_s is a learnable vector; the tanh activation is used to avoid numerical saturation during forward propagation. The weights are obtained by normalizing the scores, α_t^k = exp(s_t^k) / Σ_j exp(s_t^j), and the weighted input for hidden state h_t is then formed from α_t ⊙ f_t.

Color and texture features can directly reflect scene changes, so appearance features containing rich color and texture information can serve as a valid supplement to the skeleton representation for action recognition. However, it is difficult to capture subtle motion differences if appearance features are extracted from the entire image. Based on this, this section uses heatmaps to locate keyframes and key joints and extracts color and texture histograms within a radius R around each key joint as an efficient supplement to the deep joint features. Since the camera is often fixed, the differences between adjacent frames are small; extracting a large number of similar frames should therefore be avoided to reduce computational complexity and improve accuracy. In this section, the sequence is divided into segments, the per-frame temporal attention weight is used as the criterion, and the frame with the largest weight in each segment is extracted to represent that segment. Note that the more similar two frames are, the more similar their weights and the smaller their difference. Based on this, the weight difference between a subsequent frame i with weight β_i and the reference frame (the reference is initialized as the first frame of each segment) is compared against a threshold δ. When the difference is below δ, the subsequent frame is judged similar to the current reference frame and discarded; otherwise, the frame becomes the new reference, and all reference frames are finally retained as keyframes. Notably, the highly weighted joints within a keyframe determine how well similar actions can be distinguished, and the heatmap obtained from the joint weights shows a trend consistent with the joint movements; the region around these joints reflects the subtle variations of similar movements. On this basis, color and texture appearance features are extracted around the key joints and combined with the joint weights to compute high-level appearance information, from which a strongly unified representation of the action can be obtained.

Experimental Results and Analysis

On three public action recognition datasets (NTU RGB-D, Northwestern-UCLA, and the SBU Interaction Dataset), the proposed method is compared with handcrafted-feature, CNN, RNN, and LSTM-based action recognition methods in terms of subject changes, similar actions, viewpoint diversity, and other aspects to verify its effectiveness. The experiments use the TensorFlow deep learning framework on an Intel Core(TM) i7-7700 processor (3.60 GHz), 32 GB of memory, and an NVIDIA GeForce GTX 1070.
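The following is a minimal NumPy sketch of the spatial attention scoring and the δ-threshold keyframe selection described in this section; the matrix shapes and the demo values are assumptions for illustration.

```python
import numpy as np

def joint_attention(F, h_prev, Wf, Wh, b, u):
    """Spatial attention over K joints: softmax of u · tanh(Wf f + Wh h + b).
    F: (K, d) per-joint input features; h_prev: (m,) previous LSTM hidden state."""
    scores = np.tanh(F @ Wf.T + h_prev @ Wh.T + b) @ u  # (K,)
    e = np.exp(scores - scores.max())                   # numerically stable softmax
    return e / e.sum()

def select_keyframes(beta, delta=0.05):
    """Keep a frame whenever its temporal weight differs from the current
    reference by more than delta; similar frames are discarded."""
    keep, ref = [0], beta[0]          # first frame of the segment is the reference
    for t in range(1, len(beta)):
        if abs(beta[t] - ref) > delta:
            keep.append(t)
            ref = beta[t]             # this frame becomes the new reference
    return keep

# Hypothetical demo with random parameters and weights.
rng = np.random.default_rng(0)
K, d, m = 25, 3, 128
alpha = joint_attention(rng.standard_normal((K, d)), rng.standard_normal(m),
                        rng.standard_normal((m, d)), rng.standard_normal((m, m)),
                        rng.standard_normal(m), rng.standard_normal(m))
beta = rng.random(40); beta /= beta.sum()
print(round(alpha.sum(), 3), select_keyframes(beta, delta=0.01))
```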
A four-layer LSTM is the main network, which performs spatiotemporal attention on a single LSTM backbone; the number of neurons in each layer is 128, the appearance-feature extraction radius is 5 pixels, the initial learning rate is 0.002, and it is decayed by 10% after every 30 epochs. Adam with momentum 0.8 is used as the optimization function, the regularization coefficient is λ = 10^-5, the batch size is 64, and dropout = 0.45 is used to prevent overfitting.

The NTU RGB-D dataset is currently the RGB-D action dataset with the largest numbers of subjects and action categories [4]. The dataset consists of 40 subjects, covering 60 action categories, 56,880 video clips, and 3D skeleton data sequences captured from 3 different angles (-45°, 0°, and 45°) through 3 Kinect V2 cameras. These include single daily actions (such as drinking, eating, and reading), health-related actions (such as vomiting), human-object interactions (such as putting on clothing, tearing up paper, and kicking something), and two-person interactions (such as hugging and shaking hands), among which similar pairs such as drinking and eating or reading and writing are easily confused. The cross-subject experiment divides the 40 subjects into training and test sets [14], with training subjects numbered 1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19, 25, 27, 28, 31, 34, 35, and 38; the rest form the test set, giving 40,320 training and 16,560 test samples. In the cross-view experiment, the samples collected by the first camera form the test set and the rest form the training set; the training and test sets thus contain 37,920 and 18,960 samples, respectively. The accuracy and loss curves corresponding to the training and test sets in the cross-subject and cross-view experiments are shown in this section. As can be seen, as training proceeds, the accuracy of the model increases; when the iteration reaches 220 epochs, the recognition accuracy stabilizes and the loss converges. On the NTU RGB-D dataset, the cross-subject and cross-view accuracies are 88.73% and 90.01%, respectively, and the recognition results can be represented by the confusion matrix. Each column and row corresponds to a predicted class and the corresponding ground-truth class, the main diagonal elements represent the recognition accuracy of each action, and the remaining entries are the misclassification rates. The cross-subject and cross-view accuracies of similar actions, namely drinking and eating, reading, writing, keyboard typing, and making phone calls, are not lower than 84% and 86%, respectively, while the cross-subject and cross-view accuracies of waving hands and passing objects do not exceed 80% and 88%. In addition, the cross-subject and cross-view accuracy rates for the other actions are 85%-92% and 87%-94%, respectively. It can be seen that the proposed method achieves high accuracy in complex scenarios such as subject changes and viewpoint deviations. Based on the NTU RGB-D dataset, the cross-subject and cross-view recognition accuracies of the proposed method and mainstream methods are shown in Table 1.
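As a companion to the hyperparameters listed above, here is a hedged Keras sketch of such a training setup before turning to the comparison in Table 1. The input shape, the reading of "momentum 0.8" as Adam's beta_1 parameter, and the schedule implementation are assumptions, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, regularizers

T, J = 300, 25 * 3  # frames per clip and flattened joint coordinates (assumed)

model = models.Sequential([
    layers.LSTM(128, return_sequences=True, input_shape=(T, J)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),                            # four layers, 128 units each
    layers.Dropout(0.45),                        # dropout = 0.45 against overfitting
    layers.Dense(60, activation="softmax",       # 60 NTU RGB-D action classes
                 kernel_regularizer=regularizers.l2(1e-5)),
])

model.compile(
    # "momentum 0.8" is read here as Adam's beta_1; this is an assumption.
    optimizer=optimizers.Adam(learning_rate=0.002, beta_1=0.8),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Decay the learning rate by 10% every 30 epochs, starting from 0.002.
schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 0.002 * (0.9 ** (epoch // 30)))
# model.fit(x_train, y_train, batch_size=64, epochs=220, callbacks=[schedule])
```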
It can be seen that the joint-skeleton representation based on LARP (a Lie-group action recognition feature) [8] and the dynamic skeleton based on 3D geometric relationships do not exploit deep spatiotemporal information, so their accuracy is not high. Mapping joints to 3D volumes and extracting low-level features through a 3D CNN greatly improves the accuracy to 66.85% and 72.58%, but this does not account for the temporal information carried by the skeleton. ST-LSTM+TrustGate [7] and Two-Stream RNN take the skeleton sequence as the input of a dual-stream RNN, making full use of spatiotemporal information, but the input sequence carries substantial redundancy, which weakens recognition. Based on this, STA-LSTM [1], which adopts a spatiotemporal attention mechanism to identify keyframes and key joints, increases the accuracy to 73.40% and 81.20%. However, it only observes the joint features, ignoring the topological relationships, so the accuracy gain is limited. DS-LSTM (denoising sparse LSTM) [5] considers the relative positions of joints between frames, and fuzzy fusion+CNN [11] encodes the spatial relationships between joint pairs to improve the accuracy, but both lack appearance features, which limits their recognition ability. The proposed method feeds the spatial constraints into the spatiotemporal attention LSTM to extract discriminative spatiotemporal features and complements them with appearance features extracted around heatmap-located joints, increasing the accuracy to 88.73% and 90.01%, indicating that the proposed method achieves high accuracy in complex scenes.

To further verify the effectiveness of the proposed method, ablation experiments on the above datasets investigate the contribution of the spatial constraints to the spatiotemporal attention LSTM and the contribution of the appearance features to the joint model. Compared with the STA-LSTM baseline based only on spatiotemporal attention, the accuracy of STA-SC-LSTM (with spatial constraints) is improved by 2.43%, 1.52%, and 0.83%, respectively; this shows that the constructed spatial constraints can strengthen the action recognition ability. Compared with STA-SC-LSTM based on joint spatiotemporal features alone, the accuracy of the dual-stream fusion is improved by 12.90%, 7.29%, 8.15%, and 3.13%, indicating that appearance features are an effective complement to joint features, since spatiotemporal features alone are less discriminative for similar actions.

Conclusions

In this paper, an action recognition method based on the fusion of joint-sequence deep spatiotemporal features and appearance features is proposed. The proposed method first constructs joint spatial topological constraints to enhance the effectiveness of the joint feature expression, then constructs an LSTM with spatiotemporal attention to locate highly separable important frames and joints, and finally extracts color and texture appearance features around key joints based on heatmaps. The joint depth features and appearance features are fused frame by frame to obtain a highly separable action expression. The experimental results on the NTU RGB-D, Northwestern-UCLA, and SBU Interaction datasets verify the effectiveness of the proposed method.

Normal students in training should not only impart knowledge but also have the practical ability to observe children's behavior and analyze children's psychology.
If there is a disconnect between theory and practice, the adaptation period will be extended, which affects the development of early childhood education. Practical ability is not formed by mechanical instruction alone; it also depends on students' experience and exercise in professional practice teaching. Based on this, higher vocational colleges should actively carry out practical teaching and build a systematic practical teaching system to improve the effectiveness of practical teaching, cultivate high-quality talents for preschool education, and promote the development of preschool education.

Data Availability

The data can be obtained from the corresponding author upon request.

Conflicts of Interest

The author declares no conflicts of interest.
Teachers' Burnout: Indicators of Burnout and Investigation of the Indicators in terms of Different Variables

Burnout is characterized by emotional exhaustion, depersonalization, and reduced personal accomplishment. Since the teaching profession is excessively demanding, requires effective communication, and leads one to suffer from emotional exhaustion, it is acknowledged as one of the professions with a great likelihood of burnout. The purpose of the present study was to analyze teacher burnout in reference to certain variables. A relational model was used in this study. A total of 163 teachers from various cities participated. Two different data collection tools were used in this study, namely the "Personal Information Form" and the "Maslach Burnout Inventory (MBI)". The first one was used to identify the demographics of the participants. The second data collection instrument, the MBI, was used to reveal the degree of burnout experienced by participants. The inventory is divided into three sub-dimensions: emotional exhaustion, personal accomplishment, and depersonalization. Results indicated that different variables contributed to teachers' burnout scores in terms of being in the high and low groups. For example, while the "education level" variable contributed to the emotional exhaustion subscale, the variable of socio-economic status of the region where the school is located contributed to the depersonalization subscale. Furthermore, there were higher mean ranks for those teachers who worked as Information and Communication Technologies (ICT) teachers. While expressing their views, the ICT teachers focused on their unhappiness resulting from what they had been experiencing in their discipline. It is evident that the use of technology plays a key role in effective learning at school. ICT teachers can make contributions in this respect. They can guide both other teachers and students in using technology in an effective way. Thus, it is worth bringing attention to the fact that preventing burnout of ICT teachers is a precondition for improving the quality of education in schools.

Keywords: teachers' burnout, technology literacy, in-service training, school culture, school administrators, attitudes, students

Introduction

Burnout, as a concept, was used by scientists in the 1960s to refer to chronic drug addiction. However, the term became more popular when a psychologist called Herbert Freudenberger (1980) used it to describe his condition resulting from overworking.
Researchers have defined the term in a number of different ways since the 1970s. For instance, Pines and Aronson (1988) identified a series of symptoms to define burnout, namely "physical exhaustion, desperateness and hopelessness, frustration, low self-concept, and negative attitudes to one's job, colleagues and life in general." A high level of burnout is a serious obstacle to one's accommodation to his/her environment. Similarly, Maslach (1976) reported that employees might lose their interest in and feel hostile to their job and colleagues. Mattingly (1977) considered burnout as a series of symptoms, behaviors and attitudes specific to each individual. According to Freudenberger and Richelson (1980), burnout is exhaustion or frustration over a particular objective, lifestyle, or relationship. What all these definitions have in common is that burnout makes one weaker and leads him/her to have difficulty in accommodation to life. In this context, it may be argued that desperateness is an appropriate term to define burnout.

Research has also suggested that the level of burnout is not the same for every person; instead, it may vary from "slight burnout" to "serious burnout". In the profession of teaching, burnout is viewed as an accelerator of a number of severe problems, including "frequent absenteeism, low commitment to work, ailments, physical illness, inappropriate behaviors, and low teaching performance" (Huberman & Vanderberghe, 1999; Rudow, 1999). Similarly, Cordes and Dougherty asserted that burnout could lead to physical and mental problems, disruptions in social and family life, negative behaviors, smoking, and risks of drug and alcohol use. Having conducted widely accepted studies on burnout, Maslach identified three dimensions of burnout, namely emotional exhaustion, depersonalization, and reduced personal accomplishment. Emotional exhaustion stands for emotional burnout, depersonalization for interpersonal burnout and unresponsiveness, and reduced personal accomplishment for hopelessness at assessing one's own accomplishment (Brouwers & Tomic, 2000; Budak & Surgevil, 2005; Durr, 2008; Ergin, 1992; Gaines, 2011; Maslach & Jackson, 1981).

Teacher Burnout

Burnout is characterized by emotional exhaustion, depersonalization, and reduced personal accomplishment (Maslach and Jackson, 1981). Teaching is one of the professions in which stakeholders are subject to high levels of burnout. The reason for this is that teaching is excessively demanding, requires effective communication, and leads one to suffer from emotional burnout. Therefore, teaching is acknowledged as one of the professions with a great likelihood of burnout (Baltas & Baltas, 1993). There are many structural and organizational factors in teacher burnout, including but not limited to the public's diminished confidence in education and the gap between teachers' pre-service expectations and their actual classroom experiences (Dworkin, 2001).
Teachers are under increasing pressure to become more knowledgeable about and effective in their profession. Apart from academic expertise, they have other responsibilities as well. They are obliged to work with students who suffer from many emotional and behavioral problems. There are a lot of teachers who have difficulty in satisfying the individual needs of their students owing to the lack of resources. According to Dorman (2003), burnout has a severely adverse impact on teachers' ability to sustain their job. A teacher with burnout begins developing negative attitudes and having communication problems with their students and other teachers, which, in turn, causes health problems and damage to their private life.

There are findings in the literature suggesting that teacher burnout is a serious problem that is becoming more and more widespread in educational institutions. Several studies conducted abroad have revealed the relationship between burnout and work stress, job satisfaction, self-efficacy beliefs, and effort-reward imbalance (Dorman, 2003; Farber, 2000; Mykletun & Mykletun, 1999). The syndrome has also been heavily studied in Turkey, especially since the mid-1970s (Babaoglan, 2007; Cemaloglu & Sahin, 2007; Ercen, 2009; Gunduz, 2005; Otacioglu, 2008; Kirilmaz, Celen & Sarp, 2002; Peker, 2002; Tugrul & Celik, 2000; Tumkaya, 1996). Teachers with burnout are likely to have problems with their students, colleagues, administrators, and parents of their students (all the stakeholders in the educational process), and they become day by day more alienated from their profession.

Teachers are expected to cope on their own with many problems they face in their profession. However, teachers need to feel competent and successful; in other words, they need motivation. Only in this way can they make their students feel the same way. If teachers feel unsuccessful and dissatisfied, this does not only result in problems between those teachers and their students; even the whole school is at risk. Teacher burnout is infectious. If a school has teachers with burnout, the whole school will be feeling the same way soon. Therefore, it is a problem that needs to be tackled as early as possible.

Studies have been conducted on burnout experienced by a wide range of educationalists, from faculty members to preschool teachers. In addition, the problem has been discussed in reference to a number of variables, including demographics, occupational variables, and psychological variables. Studies in the literature have primarily focused on the reasons for and solutions to teacher burnout. Some of them have discussed the matter in reference to such variables as age, satisfaction with the environment, views of professional prospects, gender, educational background, and experience in teaching.
Teacher stress and burnout have significant influences, either directly or indirectly, on the whole society in general and on families, administrators, students, and students' parents in particular (Friedman & Farber, 1992). Societies have been undergoing considerably rapid changes. Accordingly, teachers' roles and responsibilities, as well as expectations from teachers, have been changing. These expectations affect teachers' view of life and their teaching performance. In this context, it is safe to argue that burnout should be further studied and attempts should be made to identify the correlation between the syndrome and various variables. Identification of teachers' emotional exhaustion, as well as their depersonalization towards students and reduced personal accomplishment, will hopefully contribute to revealing the overall status of teacher burnout. This will enable the syndrome, which has not been overcome despite being extensively studied, to be analyzed in the light of new perspectives in changing social structures, and different solutions to be offered. In this context, the purpose of the present study is to reveal the overall status of teacher burnout and to identify the correlation between burnout and certain demographics.

Teacher burnout is influenced by demographic factors (i.e., gender, age, educational background, experience, and marital status), institutional factors (i.e., administrative support, workload, classroom management, and work pressure), and environmental factors (i.e., the school environment and the classroom climate) (Basol & Altay, 2009; Budak & Surgevil, 2005; Ercen, 2009; Pines & Aranson, 1988). The purpose of the present study is to analyze teacher burnout in reference to certain variables. The following research questions were posed accordingly:

1. What are teachers' burnout levels in reference to their demographics?
2. Do teachers' burnout levels differ significantly depending on gender, age, experience in teaching, educational background, school type, socio-economic status of school location, and discipline?
3. Do teachers' demographics enable them to be accurately classified as belonging to groups of low or high burnout?

Method

The study was based on a correlative survey model. These models treat a phenomenon as it is, without attempting to change or affect it, and they try to identify the degree and direction of differentiation between given variables (Buyukozturk, 2009; Fraenkel & Wallen, 2006).

Study Group and its Characteristics

The study was conducted with a total of 163 teachers in different disciplines from certain provinces of Turkey (e.g., Ankara, Aksaray, Trabzon, Istanbul, Kocaeli, Corum, Kirsehir, Izmir, Karabuk, Balikesir, Mus, Sirnak, and Eskisehir). Table 1 presents the distribution of the participants by their demographics, namely gender, age, experience in teaching, educational background, school type, socio-economic status of school location, and discipline.

While 59.5% of the participants were female, the remaining 40.5% were male. Most of them were 20 to 30 years old (55.8%) and had been serving as a teacher for one to five years (42.9%). The great majority of them had a bachelor's degree (74.2%). Those who worked for public primary schools mostly defined the socio-economic status of the school location as intermediate. The discipline with the highest number of participants was Information and Communication Technologies (ICT; 47.9%) (Table 1).
Data Collection Instruments

The data for the study were collected using two instruments. The first one was the Personal Information Form, whereas the other was the Maslach Burnout Inventory. The instruments were administered to the participants online between April and June 2012.

The Personal Information Form was designed by the researchers themselves to identify the demographics of the participants, namely gender, age, experience in teaching, educational background, school type, socio-economic status of school location, and discipline. The form contained nine items. The Maslach Burnout Inventory was developed by Maslach and Jackson (1981) and adapted to Turkish by Ergin (1992) in order to reveal the degree of burnout experienced by participants. The inventory had 22 items divided into three sub-dimensions: nine items for emotional exhaustion (items 1, 2, 3, 6, 8, 13, 14, 16, and 20), eight items for personal accomplishment (items 4, 7, 9, 12, 17, 18, 19, and 21), and another five items for depersonalization (items 5, 10, 11, 15, and 22). The inventory was graded on a five-point Likert-type scale, in which zero stood for "never" and four represented "always" for emotional exhaustion and depersonalization. For personal accomplishment, however, four represented "never" and zero stood for "always", for this sub-dimension contained positive statements unlike the other two sub-dimensions.

In the study, the internal consistency coefficients were calculated for the validity and reliability of the Maslach Burnout Inventory. Cronbach's alpha was 0.887 for the overall inventory, whereas the coefficients were 0.882, 0.805, and 0.823 for emotional exhaustion, personal accomplishment, and depersonalization, respectively.

Data Analysis

The data were analyzed through descriptive statistics (percentage, mean, median, and standard deviation) and logistic regression. The level of significance was 0.05 for the analyses, which were conducted using SPSS 18.0.

Those dependent variables that were not distributed normally were classified (as high or low) on the basis of a cut-off point, which was specified in accordance with median values not affected by extreme values. For the first sub-dimension of the Maslach Burnout Inventory, Emotional Exhaustion (EE), a value of 11 and higher was classified as 1, while values lower than 11 were classified as 0. For the next sub-dimension, Depersonalization (D), a value of two and higher was classified as 1, whereas a value lower than two was classified as 0. The classification for the last sub-dimension, Personal Accomplishment (PA), was done in the same way as the first sub-dimension.

In order to identify the factors in teacher burnout, a logistic regression analysis was conducted for the dependent variables that were not distributed normally (Cokluk, Sekercioglu & Buyukozturk, 2010). In the study, the dependent variables were considered as two-dimensional categorical variables. The independent variables thought to have an influence on teacher burnout were x1: gender, x2: age, x3: experience in teaching, x4: educational background, x5: school type, x6: socio-economic status of school location, and x7: discipline; each dependent variable yi (emotional exhaustion, depersonalization, and personal accomplishment) was coded as "0-Low" and "1-High".

Findings and Discussion

The findings are presented in the same order as the questions posed for the study, in the form of answers to them.

Teachers' Burnout Levels in Reference to their Demographics

The first research question posed for the study was: "What are teachers' burnout levels in reference to their demographics?" Table 2 presents the distribution of the data obtained from frequencies, percentages, arithmetic means, and standard deviation values.
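As an aside, the cut-off coding and the logistic regression described in the Data Analysis section can be sketched as follows. The analyses in the paper were run in SPSS 18.0; this Python sketch with hypothetical column names only mirrors the procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical layout: one row per teacher; ee/d/pa hold the MBI subscale
# sums, and the predictors x1..x7 are assumed to be numerically coded.
df = pd.read_csv("mbi_scores.csv")  # assumed file, not from the paper

# Median-based cut-offs from the Data Analysis section:
df["EE_high"] = (df["ee"] >= 11).astype(int)  # emotional exhaustion
df["D_high"] = (df["d"] >= 2).astype(int)     # depersonalization
df["PA_high"] = (df["pa"] >= 11).astype(int)  # handled like EE, per the text

predictors = ["gender", "age", "experience", "education",
              "school_type", "ses_location", "discipline"]  # x1..x7
X = sm.add_constant(df[predictors])

# Binary logistic regression of high/low emotional exhaustion on demographics.
result = sm.Logit(df["EE_high"], X).fit()
print(result.summary())
print(np.exp(result.params))  # Exp(B) odds ratios, as reported in Tables 5-7
```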
The female teachers received lower scores in all three sub-dimensions of the Maslach Burnout Inventory when compared to the male teachers (Table 2). A similar finding was reported by Basol and Altay (2009), who discovered that male administrators and teachers experienced greater levels of burnout in the sub-dimensions of burnout. Likewise, Otacioglu (2008) reported that male music teachers received significantly higher scores of burnout when compared to female music teachers. Those participants who were aged 20 to 30 years old received higher scores than the other age groups in emotional exhaustion and depersonalization, whereas those who were aged 51 years old or older received a higher score than the other age groups in personal accomplishment (Table 2). A similar finding was reported by Otacioglu (2008), who discovered that the highest level of burnout was experienced by those teachers who were 26 to 35 years old. In this respect, it can be argued that age and length of service (and therefore professional experience) have an influence on burnout (Budak & Surgevil, 2005; Ormen, 1993; Otacioglu, 2008). In the present study, those participants who had been serving as a teacher for six to ten years received higher scores in all three sub-dimensions of the inventory when compared to the other groups, whereas those who had been serving for 21 to 25 years received the lowest scores of all in all three sub-dimensions. In other words, the teachers with more professional experience suffered from a lower amount of burnout. In addition, those teachers with an associate degree received higher scores than the others in emotional exhaustion and depersonalization. Even so, they received the lowest score in personal accomplishment. Similarly, Cemaloglu and Sahin (2007) concluded from their study on teachers that a lower educational status meant higher scores of burnout. These findings suggest that teacher burnout might be lessened if teachers are provided with opportunities to improve themselves academically.

As for the school type, those teachers who worked for public primary schools received higher scores than the other groups in all three sub-dimensions. Furthermore, those teachers who defined the socio-economic status of the school location as low received the highest scores of all in all three sub-dimensions, whereas those who described the socio-economic status of the school location as high received the lowest scores in all three sub-dimensions (Table 2). These two findings suggest that teacher burnout is influenced by the facilities the schools might have.
2. Teachers' Burnout Levels Depending on Gender, Age, Experience in Teaching, Educational Background, School Type, Socio-Economic Status of School Location, and Discipline

The second question posed for the study was: "Do teachers' burnout levels differ significantly depending on gender, age, experience in teaching, educational background, school type, socio-economic status of school location, and discipline?" In order to find answers to this question, the Kruskal-Wallis test was conducted. Table 3 presents the results of the Kruskal-Wallis test. According to the results, teacher burnout significantly differed depending on age, experience in teaching, socio-economic status of school location, and discipline (p ≤ 0.05). The mean rank of teacher burnout was lower for the female teachers when compared to the male teachers. In addition, the mean rank was higher for those who were 20 to 30 years old. As for length of service, the highest mean rank was for those who had been serving as a teacher for six to ten years. Furthermore, there were higher mean ranks for those teachers who had an associate degree, those who worked for public primary schools, those who defined the socio-economic status of the school location as low, and those who worked as an ICT teacher.

A review of the participants' responses to the open-ended questions suggested that the ICT teachers, in particular, raised their voices louder regarding their negative perceptions. Some of the ICT teachers reported as follows:

ICT teaching is finished; I believe that the department will be closed down soon. All ICT teachers have been made redundant. Within the scope of the optional intra-city appointments last week, ICT teachers were sent an official letter by directorates of national education, which explained that they had been made redundant and had to ask to be appointed to another school. ICT teachers are forced to become formatters. In short, graduates of Computer and Instructional Technologies like me feel blue. I cannot define what I am doing as "teaching." Maybe this is the reason why I expressed such pessimistic views. (ICT Teacher, Participant no: 22)

Recent developments make me feel alienated from my discipline. I love my job, and I would feel better if our discipline was valued more. However, these recent developments cause me to become alienated even though I am a newly recruited teacher. (ICT Teacher, Participant no: 4)

While expressing their views, the ICT teachers reflected their unhappiness resulting from what they had been experiencing in their discipline. The reason for their negative ideas might stem from the fact that the Information and Communication Technologies course has been abolished and that they feel they have no purpose. It is evident that the use of technology plays a role in effective learning at school. ICT teachers can make contributions in this respect. They can guide both other teachers and students as to how to use technology in an effective way. In fact, they are reference guides for other teachers when it comes to, in particular, the integration of technology into the curriculum. In addition, they have a pivotal role to play in the extent to which the FATIH Project, one of the most popular projects in Turkey in recent years, can be successful. However, ICT teachers with such levels of burnout cannot be expected to be useful in the process.
In terms of the degree of burnout, the concerns of the ICT teachers were also voiced by the classroom teachers. One of the classroom teachers emphasized that they were especially subject to the frustration of other teachers, underlining the necessity of supporting teachers in this regard:

To me, I have sweated blood in my nine-year teaching life. What makes me really discouraged is my colleagues' frustration, not caring about their students, and putting a wrench in the works. Teachers' attitudes towards life are reflected in students, who demoralize me a lot when I am on duty. I am also the head of the disciplinary board, paying visits to classrooms and trying to solve problems. I get worn out when I observe that they do not make the slightest effort for some students. (Classroom Teacher, Participant no: 9)

The extract above suggests that teacher burnout is infectious, and it can spread to other teachers like a virus if it is not dealt with. A review of the literature indicates that teacher burnout is getting worse and worse on the part of classroom teachers (Babaoglan, 2007; Cemaloglu & Kayabasi, 2007; Cemaloglu & Sahin, 2007). Furthermore, research (Fernet et al., 2012; Skaalvik & Skaalvik, 2010) has demonstrated that there is a relationship between teachers' perceptions of school administrators and self-efficacy, and that this relationship has a negative influence on all three sub-dimensions of burnout. In the present study, it was concluded from their responses to the open-ended questions that especially the ICT teachers had negative perceptions of school administrators. Some of the teachers expressed their views in this respect as follows:

What causes problems and frustration in our teaching life is not students, but their parents and procedures. Our productivity is lowered especially when school administrators believe that teachers are wrong and the only party to blame in most situations. (Classroom Teacher, Participant no: 11)

With the trivet of parents, administrators and teachers, a school is the most productive place of education. I believe that a school cannot function well when one of these components is missing. I am a teacher who attempts to solve most of the problems I experience without resorting to the administration. I am on good terms with parents. However, school administrators are really incompetent in problem-solving; they humiliate teachers in the presence of parents. If my school can still provide education, it is thanks to the self-sacrificing efforts of teachers. (English Language Teacher, Participant no: 17)
I believe that administrators and some teachers consider other, more knowledgeable teachers as a threat to themselves and thus do not want them to be successful in school. (ICT Teacher, Participant no: 20)

The ICT teachers in the study also underlined the negative policies towards their discipline:

With educational policies based on quantity rather than quality and Ministers of National Education undervaluing the profession of teaching, problems are becoming more and more irresolvable and causing teachers to lose their passion for teaching. (Turkish Language Teacher, Participant no: 23)

I feel alienated owing to the uncertainties surrounding the Information and Communication Technologies course and the likelihood of the course being abolished. (ICT Teacher, Participant no: 27)

To sum up, even though I am not interested in the economic aspects of my job, I want to feel at ease and comfortable. Teachers are under the pressure of all parties. (Teachers are disregarded by the Ministry of National Education, inspectors, administrators, parents, and even students in some situations.) After all, we all know how little the public values our profession. We have lots of holidays!!! We laze around, teach 3 to 5 classes a day, and then leave!!! (Classroom Teacher, Participant no: 30)

A recent example of problematic policies toward ICT teachers is that ICT Guidance, a title once specific to ICT teachers, can now be granted to all teachers regardless of their discipline through the Information and Communication Technologies Guidance Course (MEB, 2012), which lasts only 100 hours and is offered by the General Directorate of Teacher Training and Development. Such policies cause ICT teachers to feel insignificant and worthless and lead others to consider them unimportant and worthless.

Prediction of Teachers' Burnout Levels by Demographic Variables

The third research question posed for the study was: "Do teachers' demographics enable them to be accurately classified as belonging to groups of low or high burnout?" To find answers to this question, a logistic regression analysis was conducted. The results of the logistic regression analysis are presented in Tables 4, 5, 6, and 7. When compared to the initial model, the logistic regression model formed on the basis of all the independent variables could more effectively predict the classification of the participants as belonging to the groups of high or low burnout in terms of emotional exhaustion (X² = 19.869, p < .006), depersonalization (X² = 23.237, p < .002) and personal accomplishment (X² = 20.509, p < .005). The model that involved all the predictive variables could account for 62.3% of emotional exhaustion, 71% of depersonalization, and 64.2% of personal accomplishment. The Hosmer-Lemeshow test, which was conducted to test the goodness of fit for the model with the predictive variables included in the analysis, did not yield a significant result (p > 0.05), which suggested that the model had acceptable goodness of fit and the model-data fit was sufficient.
A review of the classification based on the logistic regression model indicated that 60.8% of the 79 teachers with low levels of emotional exhaustion could be accurately classified, whereas 66.3% of the 83 teachers with high levels of emotional exhaustion could be accurately classified. In addition, 47.8% of the 67 teachers with low levels of depersonalization could be accurately classified (32 teachers), while 87.4% of the 95 teachers with high levels of depersonalization could be accurately classified. Finally, 57.3% of the 75 teachers with low levels of personal accomplishment could be accurately classified (43 teachers), whereas 24.2% of the 87 teachers with high levels of personal accomplishment could be accurately classified (21 teachers). The statistics from the Wald test concerning emotional exhaustion showed that educational background made a significant contribution to being classified as belonging to the groups of high or low burnout (Table 5). A comparison of the odds ratios (Exp(β)) for the predictive variables indicated that educational background had an Exp(β) value of 2.277. The data in Table 5 suggest that a one-unit increase in this predictive variable multiplies the odds of high emotional exhaustion by 2.277.

The statistics from the Wald test regarding depersonalization showed that the socio-economic status of school location made a significant contribution to being classified as belonging to the groups of high or low burnout (Table 6). A comparison of the odds ratios (Exp(β)) for the predictive variables indicated that the socio-economic status of school location had an Exp(β) value of 0.588. The statistics from the Wald test concerning personal accomplishment showed that gender and discipline made a significant contribution to being classified as belonging to the groups of high or low burnout (Table 7). A comparison of the odds ratios (Exp(β)) for the predictive variables indicated that gender and discipline had Exp(β) values of 2.051 and 0.581, respectively. The data in Table 7 suggest that a one-unit increase in these predictive variables multiplies the odds of high personal accomplishment by 2.051 and 0.581, respectively. Similarly, various studies have concluded that gender and burnout scores are correlated with each other (Babaoglan, 2007; Cemaloglu & Sahin, 2007; Ergin, 1992; Maslach, 1982; Peker, 2002). In their studies, for example, Babaoglan (2007) and Ergin (1992) discovered that men get higher scores than women in personal accomplishment, one of the sub-dimensions of burnout, as was the case in the present study. Therefore, it can be argued that gender can be used to classify teachers as belonging to groups of high or low burnout. This is also the case for discipline. As a matter of fact, Babaoglan (2007) concluded that burnout levels differ depending on one's discipline.

Recommendations

The present study suggests that male teachers experience a greater amount of teacher burnout in all the sub-dimensions when compared to female teachers. In-depth studies could be conducted on the reasons for this. In addition, educational background has an influence on teacher burnout, for teachers with a higher educational background experience a lower amount of teacher burnout. It can be recommended on the basis of this finding that teachers should be provided with various opportunities to improve themselves personally, professionally, and academically.
Results based upon the school type and the socio-economic status of the school location show that teacher burnout is influenced by the facilities schools might have. Thus, it could be claimed that improvements in the facilities of schools, especially in low-SES communities, could contribute to improving the quality of the education offered. The teacher burnout level was higher among ICT teachers. In this context, it is suggested that new research studies be conducted on how this situation would affect the success of the FATIH Project, one of the most popular technology implementation projects in Turkey in recent years. The assumption underlying this is that ICT teachers are seen as reference guides for both other subject teachers and students as to how to integrate technology into the curriculum and how to use technologies in an effective way.

Figure 1: Dimensions of Burnout as Described by Maslach
Figure 2: Factors in Teacher Burnout (derived from the participants' responses to the open-ended question)
Table 1: Distribution of the Participants by Demographics
Table 2: The Participants' Burnout Levels by Demographics (X̄ ± S: mean ± standard deviation; EE: Emotional Exhaustion, D: Depersonalization, PA: Personal Accomplishment)
Table 3: The Results of the Kruskal-Wallis Analysis on Identifying Teacher Burnout
Table 5: The Results of the Logistic Regression Analysis on the Capability of Demographics to Predict Emotional Exhaustion
Table 6: The Results of the Logistic Regression Analysis on the Capability of Demographics to Predict Depersonalization
Table 7: The Results of the Logistic Regression Analysis on the Capability of Demographics to Predict Personal Accomplishment
Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study

There is a growing body of research on developing testing techniques for Deep Neural Networks (DNNs). We distinguish two general modes of testing for DNNs: offline testing, where DNNs are tested as individual units based on test datasets obtained independently from the DNNs under test, and online testing, where DNNs are embedded into a specific application and tested in a closed-loop mode in interaction with the application environment. In addition, we identify two sources for generating test datasets for DNNs: datasets obtained from real life and datasets generated by simulators. While offline testing can be used with datasets obtained from either source, online testing is largely confined to using simulators, since online testing within real-life applications can be time-consuming, expensive and dangerous. In this paper, we study the following two important questions aiming to compare test datasets and testing modes for DNNs: First, can we use simulator-generated data as a reliable substitute for real-world data for the purpose of DNN testing? Second, how do online and offline testing results differ and complement each other? Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end control of cars' steering actuators. Our results show that simulator-generated datasets are able to yield DNN prediction errors that are similar to those obtained by testing DNNs with real-life datasets. Further, offline testing is more optimistic than online testing, as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing.

I. INTRODUCTION

Deep Neural Networks (DNNs) [1]-[3] have made unprecedented progress, largely fueled by the increasing availability of data and computing power. DNNs have been able to automate challenging real-world tasks such as image classification [4], natural language processing [5] and speech recognition [6], making them key enablers of smart and autonomous systems such as automated-driving vehicles. As DNNs are increasingly used in safety-critical autonomous systems, the challenge of ensuring the safety and reliability of DNN-based systems emerges as a difficult and fundamental software verification problem. Many DNN testing approaches have been proposed recently [7]-[11]. Among these, we distinguish two high-level, distinct approaches to DNN testing: (1) testing DNNs as stand-alone components, and (2) testing DNNs embedded into a specific application (e.g., an automated-driving system) and in interaction with the application environment. We refer to the former as offline testing and to the latter as online testing. Specifically, in offline testing, DNNs are tested as a unit in an open-loop mode. They are fed with test inputs generated independently from the DNN under test, either manually or automatically (e.g., using image generative methods [9]). The outputs of DNNs are then typically evaluated by assessing their prediction error, which is the difference between the expected test outputs (i.e., test oracles) and the outputs generated by the DNN under test. In online testing, however, DNNs are tested within an application environment in a closed-loop mode.
They receive test inputs generated by the environment, and their outputs are then directly fed back into the environment. Online testing evaluates DNNs by monitoring the requirement violations they trigger, for example those related to safety. There have been several offline and online DNN testing approaches in the literature [12]. However, comparatively, offline testing has been far more studied to date. This is partly because offline testing does not require the DNN to be embedded into an application environment and can be readily carried out with either manually or automatically generated test data. Given the increasing availability of open-source data, a large part of offline testing research uses open-source, manually generated real-life test data. Online testing, on the other hand, necessitates embedding a DNN into an application environment, either real or simulated. Given the safety-critical nature of many systems relying on DNNs (e.g., self-driving cars), most online testing approaches rely on simulators, as testing DNNs embedded into a real, operational environment is expensive, time-consuming and can be dangerous in some cases. While both offline and online testing approaches have been shown to be promising, there is limited insight as to how these two approaches compare with one another. While, at a high level, we expect offline testing to be faster and less expensive than online testing, we do not know how the two compare with respect to their ability to reveal faults, for example faults leading to safety violations. Further, we would like to know whether large prediction errors identified by offline testing always lead to safety violations detectable by online testing, and whether the safety violations identified by online testing translate into large prediction errors. Answers to these questions enable us to better understand the relationships and the limitations of the two testing approaches. We can then determine which approach is to be recommended in practice for testing autonomous systems, or whether the two are somehow complementary and should best be combined.

In this paper, though the investigated questions are generally relevant to all autonomous systems, we perform an empirical study to compare DNN offline and online testing in the context of automated driving systems. In particular, our study aims to ultimately answer the following research question:

RQ1: How do offline and online testing results differ and complement each other?

To answer this question, we use open-source DNN models developed to automate the steering function of self-driving vehicles [13]. To enable online testing of these DNNs, we integrate them into a powerful, high-fidelity physics-based simulator of self-driving cars [14]. The simulator allows us to specify and execute scenarios capturing various road traffic situations, different pedestrian-to-vehicle and vehicle-to-vehicle interactions, and different road topologies, weather conditions and infrastructures. As a result, in our study, offline and online testing approaches are compared with respect to data generated automatically using a simulator. To ensure that this aspect does not impact the validity of our comparison, we investigate the following research question as a prerequisite of the above question:

RQ0: Can we use simulator-generated data as a reliable substitute for real-world data for the purpose of DNN testing?

To summarize, the main contribution of this paper is that we provide, for the first time, an empirical study comparing offline and online testing of DNNs.
Our study investigates the two research questions RQ0 and RQ1 (described above) in the context of an automated-driving system. Specifically:

1) RQ0: Our results show that simulator-generated datasets are able to yield DNN prediction errors that are similar to those obtained by testing DNNs with real-life datasets. Hence, simulator-generated data can be used in lieu of real-life datasets for testing DNNs in our application context.

2) RQ1: We found that offline testing is more optimistic than online testing because the accumulation of prediction errors over time is not observed in offline testing. Specifically, many safety violations identified by online testing could not be identified by offline testing as they did not cause large prediction errors. However, all the large prediction errors generated by offline testing led to severe safety violations detectable by online testing.

To facilitate the replication of our study, we have made all the experimental materials, including the simulator-generated data, publicly available [15]. The rest of the paper is organized as follows. Section II provides background on DNNs for autonomous vehicles, introduces offline and online testing, describes our proposed domain model that is used to configure simulation scenarios for automated driving systems, and formalizes the main concepts in offline and online testing used in our experiments. Section III reports the empirical evaluation. Section IV surveys the existing research on online and offline testing for automated driving systems. Section V concludes the paper.

II. OFFLINE AND ONLINE TESTING FRAMEWORKS

This section provides the basic concepts that will be used throughout the paper.

A. DNNs in ADS

Depending on the ADS design, DNNs may be used in two ways to automate the driving task of a vehicle. One design approach is to incorporate DNNs into the perception layer of the ADS, primarily to do semantic segmentation [16], i.e., to classify and label each and every pixel in a given image. The software controller of the ADS then decides what commands should be issued to the vehicle's actuators based on the classification results produced by the DNN [17]. An alternative design approach is to use DNNs to perform end-to-end control of a vehicle [13] (e.g., Figure 1). In this case, DNNs directly generate the commands to be sent to the vehicle's actuators after processing the images received from cameras. Our approach to comparing offline and online testing of DNNs for ADS is applicable to both ADS designs. In the comparison provided in this paper, however, we use DNN models automating the end-to-end control of the steering function of an ADS, since these models are publicly available online and have been extensively used in recent papers on DNN testing [8]-[10], [18]. In particular, we use the DNN models from the Udacity self-driving challenge as our study subjects [13]. We refer to this class of DNNs as ADS-DNNs in the remainder of the paper. Specifically, an ADS-DNN receives inputs from a camera and generates a steering angle command. Figure 2 represents an overview of offline testing of DNNs in the context of ADS. In general, a dataset used to test a DNN (or any ML model, for that matter) is expected to be realistic so as to provide an unbiased evaluation of the DNN under test. As shown in Figure 2, we identify two sources for generating test data for the offline mode: (1) datasets captured from real-life driving, and (2) datasets generated by simulators.
Fig. 2. Offline testing using (1) real-world data and (2) simulator-generated data

For our ADS-DNN models, a real-life dataset is a video or a sequence of images captured by a camera mounted on an (ego) vehicle's dashboard while the vehicle is being driven by a human driver. The steering angle of the vehicle applied by the human driver is recorded for the duration of the video, and each image (frame) of the video in this sequence is labelled by its corresponding steering angle. This yields a sequence of manually labelled images to be used for testing DNNs. There are, however, some drawbacks with test datasets captured from real life. Specifically, data generation is expensive, time-consuming and lacks diversity. The latter issue is particularly critical since driving scenes, driving habits, as well as objects, infrastructures and roads in driving scenes, can vary widely across countries, continents, climates, seasons, times of day, and even drivers. As shown in Figure 2, another source of test data generation for DNN offline testing is to use simulators to automatically generate videos capturing various driving scenarios. There are increasingly more high-fidelity and advanced physics-based simulators for self-driving vehicles, fostered by the needs of the automotive industry, which increasingly relies on simulators to improve its testing and verification practices. There are several examples of commercial ADS simulators (e.g., PreScan [14] and Pro-SiVIC [19]) and a number of open-source ones (e.g., CARLA [20] and Apollo [21]). These simulators incorporate dynamic models of vehicles (including vehicles' actuators, sensors and cameras) and humans, as well as various environment aspects (e.g., weather conditions, different road types, different infrastructures). The simulators are highly configurable and can be used to generate desired driving scenarios. In our work, we use the PreScan simulator to generate test data for ADS-DNNs. PreScan is a widely used, high-fidelity commercial ADS simulator in the automotive domain and has been used by our industrial partner. In Section II-D, we present the domain model we define to configure the simulator, and describe how we automatically generate scenarios that can be used to test ADS-DNNs. Similar to real-life videos, the videos generated by our simulator are sequences of labelled images such that each image is labelled by a steering angle. In contrast to real-life videos, the steering angles generated by the simulator are automatically computed based on the road trajectory, as opposed to being generated by a human driver.

B. Offline Testing

The simulator-generated test datasets are cheaper and faster to produce compared to real-life ones. In addition, depending on how advanced and comprehensive the simulator is, we can generate more diverse test data than what can be captured from real-life driving.

C. Online Testing

Figure 3 provides an overview of online testing of DNNs in the context of ADS. In contrast to offline testing, DNNs are embedded into a simulator: they receive images generated by the simulator, and their outputs are directly sent to the (ego) vehicle model of the simulator. In this paper, we embed the ADS-DNN into PreScan by providing the former with the outputs of the camera model as input and connecting the steering angle output of the ADS-DNN as an input command to the vehicle dynamics model. With online testing, we can evaluate how the prediction generated by an ADS-DNN for an image generated at time t in a scenario impacts the images to be generated at the time steps after t.
Specifically, if the ADS-DNN orders the ego vehicle to turn with an angle θ at time t during a simulation, the camera's field of view will be shifted by θ within a small time duration t_d, and hence the image captured at time t + t_d will account for the modified field of view. Note that t_d is the time required by the vehicle to actually perform a command and is computed by the dynamics model in the simulator. With online testing, in addition to the steering angle outputs directly generated by the ADS-DNN, we obtain the trajectory outputs of the ego vehicle, which enable us to determine whether the car is able to stay in its lane. Note that one could perform online testing with a real car and collect real-life data. However, this is expensive, very dangerous, in particular for end-to-end DNNs such as ADS-DNN, and can only be done under very restricted conditions on some specific public roads. We conduct an empirical study in Section III to investigate how offline and online testing results differ and complement each other for ADS-DNNs.

D. Domain Model

To develop the domain model, we relied on the features that we observed in the real-world test datasets for ADS-DNN (i.e., the Udacity testing datasets [22]) as well as the configurable features of our simulator. The domain model includes different types of road topologies (e.g., straight, curved, with entry or exit lane), different weather conditions (e.g., sunny, foggy, rainy, snowy), infrastructure (e.g., buildings and overhead hangings), nature elements (e.g., trees and mountains), an ego vehicle, secondary vehicles and pedestrians. Each entity has multiple variables. For example, an ego vehicle has the following variables: a speed, a number (id) identifying the lane in which it is driving, a Boolean variable indicating if its fog lights are on or off, and many others. In addition to entities and variables, our domain model includes constraints describing valid value assignments to the variables. These constraints mostly capture physical limitations and traffic rules. For example, the vehicle speed cannot be higher than some limit on steep curved roads. We have specified these constraints in the Object Constraint Language (OCL) [23]. The complete domain model, together with the OCL constraints, is available in the supporting materials [15]. To produce a simulation scenario (or test scenario) for ADS-DNN, we develop an initial configuration based on our domain model. An initial configuration is a vector of values assigned to the variables in the domain model and satisfying the OCL constraints. The simulator generates, for each of the mobile objects defined in a scenario (namely the ego vehicle, secondary vehicles, and pedestrians), a vector of positions over time. The position values are computed based on the characteristics of the static objects specified by the initial configuration, such as roads and sidewalks, as well as the speed of the mobile objects. Table I summarizes the comparison between offline and online testing as detailed in Sections II-B and II-C. Briefly, offline testing verifies the DNN using historical data consisting of sequences of images captured by a real-life camera or generated by the camera model of a simulator. In either case, the images are labelled with steering angles. Offline testing measures the prediction errors of the DNN to evaluate test results. In contrast, online testing verifies the DNN embedded into an application environment in a closed-loop mode.
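To illustrate the closed loop, the following sketch shows the interaction pattern just described, together with the MDCL-based safety check formalized in Section II-E below. `Simulator` is a hypothetical wrapper, not PreScan's actual API.

```python
# `Simulator` is a hypothetical wrapper around the co-simulation interface;
# reset() returns the first camera frame, step() applies a steering command
# and returns the next frame plus the car's lateral offset from lane center.
def run_online_test(simulator, dnn, horizon_steps, lane_bound=1.5):
    """Closed-loop run: camera frame -> DNN -> steering -> simulator update."""
    mdcl = 0.0
    image = simulator.reset()
    for _ in range(horizon_steps):
        steering = dnn.predict(image)              # predicted angle at step j
        image, offset = simulator.step(steering)   # car state updated after t_d
        mdcl = max(mdcl, abs(offset))              # track worst lane deviation
    mdcl = min(mdcl, lane_bound) / lane_bound      # cap at 1.5 m and normalize
    return mdcl, mdcl >= 1.0                       # (normalized MDCL, violation?)
```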
E. Formalization

In the remainder of this section, we formalize inputs and outputs for offline and online testing. We denote a real-life test dataset by a sequence r = (i^r_1, θ^r_1), (i^r_2, θ^r_2), ..., (i^r_n, θ^r_n) of tuples. For j = 1, ..., n, each tuple (i^r_j, θ^r_j) of r consists of an image i^r_j and a steering angle label θ^r_j. A DNN d, when provided with the sequence i^r_1, i^r_2, ..., i^r_n of the images of r, returns a sequence θ̂^r_1, θ̂^r_2, ..., θ̂^r_n of predicted steering angles. The prediction error of d for r is then computed using two well-known metrics, Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), defined below:

MAE(d, r) = (1/n) Σ_{j=1}^{n} |θ̂^r_j − θ^r_j|        RMSE(d, r) = √[ (1/n) Σ_{j=1}^{n} (θ̂^r_j − θ^r_j)² ]

To generate a test dataset using a simulator, we provide the simulator with an initial configuration of a scenario as defined in Section II-D. We denote the test dataset generated by a simulator for a scenario s for offline testing by sim(s). For online testing, we embed a DNN d into a simulator and run the simulator. For each (initial configuration of a) scenario, we execute the simulator for a time duration T. The simulator generates outputs as well as images at regular time steps t_δ, producing output vectors of size m = T/t_δ. Each simulator output and image takes an index between 1 and m; we refer to these indices as simulation time steps. At each time step j, the simulator generates an image i^s_j to be sent to d as input, and d generates a predicted steering angle θ̂^s_j which is sent back to the simulator. The status of the ego car is then updated in the next time step j + 1 (i.e., the time it takes to update the car is t_δ) before the next image i^s_{j+1} is generated. In addition to images, the simulator generates the position of the ego car over time. Since our DNN is for an automated lane keeping function, we use the Maximum Distance from Center of Lane (MDCL) metric for the ego car to determine if a safety violation has occurred. The value of MDCL is computed at the end of the simulation, once we have the position vector of the ego car over the time steps during which it was guided by our DNN. We cap the value of MDCL at 1.5 m, indicating that when MDCL is 1.5 m or larger, the ego car has already departed its lane and a safety violation has occurred. To obtain a range between 0 and 1, MDCL is normalized by dividing the actual distance in meters by 1.5.
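The sketch below restates this formalization in code: MAE and RMSE for offline testing, and a closed-loop run that returns the normalized MDCL for online testing. The `simulator` and `dnn` objects are assumed interfaces used for illustration, not the PreScan API.

```python
import numpy as np

def mae(predicted, labels):
    """Mean Absolute Error over a labelled sequence, per the definition above."""
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(labels))))

def rmse(predicted, labels):
    """Root Mean Square Error over a labelled sequence."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(labels)) ** 2)))

def online_test(simulator, dnn, T, t_delta, cap=1.5):
    """Closed-loop run: at step j the simulator renders image i_j, the DNN
    predicts a steering angle, and the ego car state is updated before
    image i_{j+1} is rendered. Returns the normalized MDCL in [0, 1].
    `simulator` and `dnn` are assumed interfaces, not a concrete API."""
    m = int(T / t_delta)                 # number of simulation time steps
    max_dist = 0.0
    for j in range(m):
        image = simulator.render_image()         # i_j from the camera model
        theta = dnn.predict(image)               # predicted steering angle
        simulator.apply_steering(theta)          # command to the dynamic model
        simulator.advance(t_delta)               # car status updated for step j+1
        max_dist = max(max_dist, abs(simulator.distance_from_lane_center()))
    return min(max_dist, cap) / cap              # MDCL capped at 1.5 m
```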
III. EXPERIMENTS

In this section, we compare offline and online testing of DNNs by answering the two research questions we have already motivated in Sections I and II, which are re-stated below.

RQ0: Can we use simulator-generated data as a reliable alternative source to real-world data?

Recall that in Figure 2 we described two sources for generating test data for offline testing. As discussed there, simulator-generated test data is cheaper and faster to produce and is more amenable to input diversification than real-life test data. On the other hand, the texture and resolution of real-life data look more natural and realistic compared to simulator-generated data. With RQ0, we aim to investigate whether or not such differences lead to significant inaccuracies in the predictions of the DNN under test. To do so, we configure the simulator to generate a dataset (i.e., a sequence of labelled images) that closely resembles the characteristics of a given real-life dataset. We then compare the offline testing results for these datasets. The answer to this question, which serves as a prerequisite for our next question, will determine if we can rely on simulator-generated data for testing DNNs in either offline or online testing modes.

RQ1: How do offline and online testing results differ and complement each other?

RQ1 is the main research question we want to answer in this paper. It is important to know how the results obtained by testing a DNN irrespective of a particular application compare with the test results obtained by embedding the DNN into a specific application environment. The answer will guide engineers and researchers to better understand the applications and limitations of each testing mode.

A. Subject DNNs

Autumn consists of an image preprocessing module implemented using OpenCV to compute the optical flow of raw images, and a Convolutional Neural Network (CNN) implemented using TensorFlow and Keras to predict steering angles. Chauffeur consists of one CNN that extracts the features of input images and a Recurrent Neural Network (RNN) that predicts steering angles from the previous 100 consecutive images with the aid of an LSTM (Long Short-Term Memory) module. Chauffeur is also implemented with TensorFlow and Keras. The models are developed using the Udacity dataset [22], which contains 33808 images for training and 5614 images for testing. The images are sequences of frames of two separate videos, one for training and one for testing, recorded by a dashboard camera at 20 frames per second (FPS). For each image, the dataset also provides the actual steering angle produced by a human driver while the videos were recorded. A positive (+) steering angle represents turning right, a negative (−) steering angle represents turning left, and a zero angle represents staying on a straight line. The steering angle values are normalized (i.e., they are between −1 and +1), where a steering angle value of +1 indicates 25° and a value of −1 indicates −25°. Figure 5 shows the actual steering angle values for the sequence of 5614 images in the test dataset. We note that the order of images in the training and test datasets matters and is accounted for when applying the DNN models. As shown in the figure, the steering angles issued by the driver vary considerably over time. The large steering angle values (more than 3°) indicate actual road curves, while the smaller fluctuations are due to the natural behavior of the human driver, even when the car drives on a straight road. Table II shows the RMSE and MAE values obtained by applying the two models to the Udacity test dataset. Note that we were not able to exactly replicate the RMSE values reported on the Udacity self-driving challenge website [13], as the values in Table II are slightly different from those provided by Udacity.
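Since the Udacity labels are normalized, comparing them with angles in degrees requires the simple mapping below (+1 corresponds to 25°, −1 to −25°).

```python
def denormalize_angle(value):
    """Maps a normalized steering angle in [-1, +1] to degrees in [-25, +25]."""
    return 25.0 * value

def normalize_angle(degrees):
    """Inverse mapping, from degrees back to the normalized [-1, +1] range."""
    return degrees / 25.0

assert denormalize_angle(1.0) == 25.0 and normalize_angle(-25.0) == -1.0
```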
Reproducibility is known to be a challenge for state-of-the-art deep learning methods [26], since they involve many parameters and details whose variations may lead to different results. To enable replication of our work, we have made our detailed configurations (e.g., Python and auxiliary library versions), together with supporting materials, available online [15]. While MAE and RMSE are two of the most common metrics used to measure prediction errors for learning models with continuous variable outputs, we mainly use MAE throughout this paper because, in contrast to RMSE, MAE values can be directly compared with individual steering angle values. For example, MAE(d, r) = 1 means that the average prediction error of d for the images in r is 1 (i.e., 25°). Since MAE is a more intuitive metric for our purpose, we will only report MAE values in the remainder of the paper.

B. RQ0: Comparing Offline Testing Results for Real-life Data and Simulator-generated Data

1) Setup: We aim to generate simulator-generated datasets closely mimicking the Udacity real-life test dataset, and to verify whether the prediction errors obtained by applying DNNs to the simulator-generated datasets are comparable with those obtained for their corresponding real-life ones. As explained in Section III-A, our real-life test dataset is a sequence of 5614 images labelled by their corresponding actual steering angles. If we could precisely extract the properties of the environment and the dynamics of the ego vehicle from the real-life dataset in terms of initial configuration parameters of the simulator, we could perhaps generate simulated data resembling the real-life videos with high accuracy. However, extracting information from real-life video images in a form that can be used as input to a simulator is not possible. Instead, we propose a two-step heuristic approach to replicate the real-life dataset using our simulator. Basically, we steer the simulator to generate a sequence of images similar to the images in the real-life dataset such that the steering angles generated by the simulator are also close to the steering angle labels in the real-life dataset. In the first step, we inspect the test dataset and manually identify the information in the images that corresponds to configurable parameter values in our domain model described in Section II-D. We then create a restricted domain model by fixing the parameters in our domain model to the values we identified by observing the images in the Udacity test dataset. This enables us to steer the simulator to resemble the characteristics of the images in the test dataset to the extent possible. Our restricted domain model includes the entities and attributes that are neither gray-colored nor bold in Figure 4. For example, the restricted domain model does not include weather conditions other than sunny, because the test dataset contains only sunny images. This guarantees that the simulator-generated images based on the restricted domain model represent sunny scenes only. Using the restricted domain model, we randomly generate a large number of scenarios, yielding a large number of simulator-generated datasets. In the second step, we aim to ensure that the datasets generated by the simulator have steering angle labels similar to the labels in the real-life dataset. To ensure this, we match the simulator-generated datasets with (sub)sequences of the Udacity test dataset such that the similarities between their steering angles are maximized.
Note that the steering angle is not a configurable variable in our domain model, and hence we could not force the simulator to generate data with the specific steering angle values of the test dataset by restricting our domain model. Instead, we minimize the differences by selecting the closest simulator-generated datasets from a large pool of randomly generated ones. To do this, we define, below, the notion of "comparability" between a real-life dataset and a simulator-generated dataset in terms of steering angles. Let S be a set of randomly generated scenarios using the restricted domain model, and let r = (i^r_1, θ^r_1), ..., (i^r_k, θ^r_k) be the Udacity test dataset, where k = 5614. We denote by r_(x,l) = (i^r_{x+1}, θ^r_{x+1}), ..., (i^r_{x+l}, θ^r_{x+l}) a subsequence of r with length l starting from index x + 1, where x ∈ {0, ..., k − l}. For a given simulator-generated dataset sim(s) = (i^s_1, θ^s_1), ..., (i^s_n, θ^s_n) corresponding to a scenario s ∈ S, we compute r_(x,l) using the following three conditions:

1) l = n;
2) x = argmin_{x'} Σ_{j=1}^{n} |θ^s_j − θ^r_{x'+j}|;
3) (1/n) Σ_{j=1}^{n} |θ^s_j − θ^r_{x+j}| ≤ ε;

where argmin_{x'} f(x') returns the x' minimizing f(x') (if f has multiple minima, one of them is randomly returned), and ε is a small threshold on the average steering angle difference between sim(s) and r_(x,l). We say datasets sim(s) and r_(x,l) are comparable if and only if r_(x,l) satisfies the three conditions above. Given this formalization, our approach to replicate the real-life dataset r using our simulator can be summarized as follows. In the first step, we randomly generate a set S of many scenarios based on the restricted domain model. In the second step, for every scenario s ∈ S, we identify a subsequence r_(x,l) of r such that sim(s) and r_(x,l) are comparable. If ε is too large, we may find an r_(x,l) whose steering angles are too different from those in sim(s). On the other hand, if ε is too small, we may not be able to find an r_(x,l) that is comparable to sim(s) for many of the randomly generated scenarios s ∈ S. In our experiments, we select ε = 0.1 (2.5°) since, based on our preliminary evaluations, this threshold achieves an optimal balance. For each comparable pair sim(s) and r_(x,l), we measure and compare the prediction errors, i.e., MAE(d, sim(s)) and MAE(d, r_(x,l)), of a DNN d. Recall that offline testing results for a given DNN d are measured based on prediction errors in terms of MAE. If |MAE(d, sim(s)) − MAE(d, r_(x,l))| ≤ 0.1 (meaning 2.5° of average prediction error across all images), we say that r_(x,l) and sim(s) yield consistent offline testing results for d.
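The matching step described above amounts to a sliding-window search. The sketch below implements the three comparability conditions for one scenario: it fixes l = n, scans all offsets x for the argmin of the average steering-angle difference, and applies the threshold ε = 0.1.

```python
import numpy as np

def find_comparable_subsequence(theta_sim, theta_real, eps=0.1):
    """Returns (x, avg_diff) for the subsequence of theta_real that best
    matches theta_sim, or None when no subsequence meets the threshold eps
    (0.1, i.e. 2.5 degrees of average steering-angle difference)."""
    n, k = len(theta_sim), len(theta_real)
    best_x, best_diff = None, np.inf
    for x in range(k - n + 1):                       # all valid offsets, l = n
        diff = np.mean(np.abs(theta_sim - theta_real[x:x + n]))
        if diff < best_diff:                         # argmin over x
            best_x, best_diff = x, diff
    return (best_x, best_diff) if best_diff <= eps else None
```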
2) Results: Among the 100 randomly generated scenarios (i.e., |S| = 100), we identified 92 scenarios that could be matched to subsequences of the Udacity real-life test dataset. Figure 6a shows the steering angles for an example comparable pair sim(s) and r_(x,l) in our experiment, and Figures 6b and 6c show two example matching frames from r_(x,l) (i.e., the real dataset) and sim(s) (i.e., the simulator-generated dataset), respectively. As shown in the steering angle graph in Figure 6a, the simulator-generated dataset and its comparable real dataset subsequence have very similar steering angles. Note that the actual steering angles issued by a human driver show natural fluctuations, while the steering angles generated by the simulator are very smooth. Also, the example matching images in Figures 6b and 6c look quite similar. Figure 7 shows the distributions of the differences between the prediction errors obtained for the real datasets (subsequences) and the simulator-generated datasets for each of our DNNs, Autumn and Chauffeur. For Autumn, the average prediction error difference between the real datasets and the simulator-generated datasets is 0.027. Further, 96.7% of the comparable pairs show a prediction error difference below 0.1 (2.5°). This means that the (offline) testing results obtained for the simulator-generated datasets are consistent with those obtained using the real-world datasets for almost all comparable dataset pairs. On the other hand, for Chauffeur, 68.5% of the comparable pairs show a prediction error difference below 0.1. This means that the testing results for the real datasets and the simulator-generated datasets are inconsistent in 31.5% of the 92 comparable pairs. Specifically, for all of the inconsistent cases, we observed that the MAE value for the simulator-generated dataset is greater than the MAE value for the real-world dataset. It is therefore clear that the prediction error of Chauffeur tends to be larger for the simulator-generated datasets than for the real-world datasets. In other words, the simulator-generated datasets tend to be conservative for Chauffeur and report more false positives than for Autumn in terms of prediction errors. We also found that, in several cases, Chauffeur's prediction errors are greater than 0.2 while Autumn's prediction errors are less than 0.1 for the same simulator-generated dataset. One possible explanation is that Chauffeur is over-fitted to the texture of real images, while Autumn is not, thanks to its image preprocessing module. Nevertheless, the average prediction error difference between the real datasets and the simulator-generated datasets is 0.079 for Chauffeur, which is still less than 0.1. This implies that, although Chauffeur will lead to more false positives (incorrect safety violations) than Autumn, the number of false positives is still unlikely to be overwhelming. We remark that the choice of simulator, as well as the way we generate data using the selected simulator, based on carefully designed experiments such as the ones presented here, are of great importance. Selecting a suboptimal simulator may lead to many false positives (i.e., incorrectly identified prediction errors), rendering simulator-generated datasets ineffective.

[Fig. 7. Distributions of the differences between the prediction errors obtained for the real datasets (subsequences) and the simulator-generated datasets]

The answer to RQ0 is that the prediction error differences between simulator-generated datasets and real-life datasets are less than 0.1, on average, for both Autumn and Chauffeur. We conclude that we can use simulator-generated datasets as a reliable alternative to real-world datasets for testing DNNs.

C. RQ1: Comparing Offline and Online Testing Results

1) Setup: We aim to compare offline and online testing results in this research question. We randomly generate 50 scenarios and compare the offline and online testing results for each of the simulator-generated datasets. For the scenario generation, we use the extended domain model (see Figure 4) to take advantage of all the feasible features provided by the simulator. Specifically, in Figure 4, the gray-colored entities and attributes in bold are included in the extended domain model in addition to the restricted domain model used for RQ0.
For example, the full domain model contains various weather conditions, such as rain, snow, and fog, in addition to sunny. Let S′ be the set of randomly generated scenarios based on the full domain model. For each scenario s ∈ S′, we prepare the simulator-generated dataset sim(s) for offline testing and measure MAE(d, sim(s)). For online testing, we measure MDCL(d, s). Since MAE and MDCL are different metrics, we cannot directly compare MAE and MDCL values. To determine whether the offline and online testing results are consistent, we set threshold values for MAE and MDCL. If MAE(d, sim(s)) < 0.1 (meaning the average prediction error is less than 2.5°), then we interpret the offline testing result of d for s as acceptable. On the other hand, if MDCL(d, s) < 0.7 (meaning that the departure from the centre of the lane observed during the simulation of s is less than around one meter), then we interpret the online testing result of d for s as acceptable. If the offline and online testing results of d are both acceptable, or both unacceptable, we say that offline and online testing are in agreement regarding testing d for s.
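The agreement criterion can be written directly as a predicate over the two metrics; the thresholds are the ones defined above (0.1 for MAE, 0.7 for MDCL).

```python
def agreement(mae_value, mdcl_value, mae_thr=0.1, mdcl_thr=0.7):
    """Classifies one scenario: the offline result is acceptable when MAE < 0.1
    (average error below 2.5 degrees), the online result when MDCL < 0.7
    (lane departure below roughly one meter); the two testing modes agree
    when both verdicts match."""
    offline_ok = mae_value < mae_thr
    online_ok = mdcl_value < mdcl_thr
    return offline_ok == online_ok, offline_ok, online_ok
```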
2) Results: Figure 8 shows the comparison between offline and online testing results in terms of MAE and MDCL values for all the randomly generated scenarios in S′, where |S′| = 50. The x-axis is MAE (offline testing) and the y-axis is MDCL (online testing). The dashed lines represent the thresholds, i.e., 0.1 for MAE and 0.7 for MDCL. Table III reports the number of scenarios classified by the offline and online testing results based on these thresholds. The results show that offline testing and online testing are not in agreement for 44% and 34% of the 50 randomly generated scenarios for Autumn and Chauffeur, respectively. Surprisingly, offline testing is always more optimistic than online testing for the disagreement scenarios. In other words, there is no case where the online testing result is acceptable while the offline testing result is not. Figure 9 shows one of the scenarios on which offline and online testing disagreed. As shown in Figure 9a, the prediction error of the DNN for each image is always less than 1°. This means that the DNN appears to be accurate enough according to offline testing. However, based on the online testing result in Figure 9b, the ego vehicle departs from the center of the lane in a critical way (i.e., by more than 1.5 m). This is because, over time, small prediction errors accumulate, eventually causing a critical lane departure. Such accumulation of errors over time is only observable in online testing, and this also explains why there is no case where the online testing result is acceptable while the offline testing result is not. The experimental results imply that offline testing alone cannot expose such safety violations. Considering the fact that detecting safety violations in ADS is the ultimate goal of ADS-DNN testing, we conclude that online testing is preferable to offline testing for ADS-DNNs.

The answer to RQ1 is that offline and online testing results differ in many cases. Offline testing is more optimistic than online testing because the accumulation of errors is not observed in offline testing.

D. Threats to Validity

We propose a two-step approach that builds simulator-generated datasets comparable to a given real-life dataset. While it achieves its objective, as shown in Section III-B2, the simulated images are still different from the real images. However, we confirmed that the prediction errors obtained by applying our subject DNNs to the simulator-generated datasets are comparable with those obtained for their corresponding real-life datasets. Thus, the experimental finding that offline and online testing results often disagree with each other remains valid. We used a few thresholds that may change the experimental results quantitatively. To reduce the chances of misinterpreting the results, we selected intuitive and physically interpretable metrics to directly evaluate both offline and online test results (i.e., prediction errors and safety violations), and we defined threshold values based on common sense and experience. Further, adopting different threshold values, as long as they are within a reasonable range, does not change our findings. For example, if we use MAE(d, sim(s)) < 0.05 as a threshold for offline testing results instead of MAE(d, sim(s)) < 0.1, the numbers of scenarios in Table III change. However, this neither changes the fact that there are many scenarios for which offline and online testing results disagree, nor the conclusion that offline testing is more optimistic than online testing. Though our case study focused on lane-keeping DNNs (steering prediction), which have rather simple structures and do not support braking or acceleration, our findings are applicable to all DNNs in the context of ADS as long as the closed-loop behavior of the ADS matters.

IV. RELATED WORK

Table IV summarizes DNN testing approaches specifically proposed in the context of autonomous driving systems; approaches to the general problem of testing machine learning systems are discussed in the recent survey by Zhang et al. [12]. In Table IV, online testing approaches are highlighted in grey. As indicated in Table I, offline testing approaches focus on DNNs as individual units, without accounting for the closed-loop behavior of a DNN-based ADS. Most of them aim to generate test data (either images or 3-dimensional point clouds) that lead to DNN prediction errors. Dreossi et al. [27] synthesized images for driving scenes by arranging basic objects (e.g., road backgrounds and vehicles) and tuning image parameters (e.g., brightness, contrast, and saturation). Pei et al. [7] proposed DEEPXPLORE, an approach that synthesizes images by solving a joint optimization problem that maximizes both neuron coverage (i.e., the rate of activated neurons) and the differential behaviors of multiple DNNs for the synthesized images. Tian et al. [8] presented DEEPTEST, an approach that generates label-preserving images from training data using greedy search for combining simple image transformations (e.g., rotation, scaling, and fog and rain effects) to increase neuron coverage. Wicker et al. [29] generated adversarial examples, i.e., small perturbations that are almost imperceptible to humans but cause DNN misclassifications, using feature extraction from images. Zhang et al. [9] presented DEEPROAD, an approach that produces various driving scenes and weather conditions by applying Generative Adversarial Networks (GANs) with corresponding real-world weather scenes. Zhou et al. [32] combined Metamorphic Testing (MT) and fuzzing for 3-dimensional point cloud data generated by a LiDAR sensor to reveal erroneous behaviors of an object detection DNN. Zhou et al. [11] proposed DEEPBILLBOARD, an approach that produces both digital and physical adversarial billboard images to continuously mislead the DNN across dashboard camera frames.
While this work differs from the other offline testing studies in that it introduces adversarial attacks through sequences of frames, its goal is still the generation of test images that reveal DNN prediction errors. In contrast, Kim et al. [18] defined a coverage criterion, called surprise adequacy, based on the behavior of DNN-based systems with respect to their training data. Images generated by DEEPTEST were sampled to improve such coverage and used to increase the accuracy of the DNN against adversarial examples. Online testing studies exercise the ADS closed-loop behavior and generate test driving scenarios that cause safety violations, such as unintended lane departures or collisions with pedestrians. Tuncali et al. [28] were the first to raise the problem that previous works mostly focused on the DNNs without accounting for the closed-loop behavior of the system. Gambi et al. [30] also pointed out that testing DNNs for ADS using only single frames cannot evaluate the closed-loop properties of ADS. They presented ASFAULT, a tool that generates virtual roads which cause self-driving cars to depart from their lane. Majumdar et al. [31] presented a language for describing test driving scenarios in a parametric way and provided PARACOSM, a simulation-based testing tool that generates a set of test parameters in such a way as to achieve diversity. We should note that all the online testing studies rely on virtual (simulated) environments since, as mentioned before, testing DNNs for ADS in real traffic is very dangerous and expensive.

Table IV. DNN testing approaches in the context of autonomous driving systems.
Dreossi et al. [27] | 2017 | Offline | Object detection | Test image generation by arranging basic objects using greedy search
Pei et al. [7] | 2017 | Offline | Lane keeping | Coverage-based label-preserving test image generation using joint optimization with gradient ascent
Tian et al. [8] | 2018 | Offline | Lane keeping | Coverage-based label-preserving test image generation using greedy search with simple image transformations
Tuncali et al. [28] | 2018 | Online | Object detection | Test scenario generation using the combination of covering arrays and simulated annealing
Wicker et al. [29] | 2018 | Offline | Traffic sign recognition | Adversarial image generation using feature extraction
Zhang et al. [9] | 2018 | Offline | Lane keeping | Label-preserving test image generation using Generative Adversarial Networks (GANs)
Zhou et al. [11] | 2018 | Offline | Lane keeping | Adversarial billboard-image generation for digital and physical adversarial perturbation
Gambi et al. [30] | 2019 | Online | Lane keeping | Automatic virtual road network generation using search-based Procedural Content Generation (PCG)
Kim et al. [18] | 2019 | Offline | Lane keeping | Improving the accuracy of DNNs against adversarial examples using surprise adequacy
Majumdar et al. [31] | 2019 | Online | Object detection, lane keeping | Test scenario description language and simulation-based test scenario generation to cover parameterized environments
Zhou et al. [32] | 2019 | Offline | Object detection | Combination of Metamorphic Testing (MT) and fuzzing for 3-dimensional point cloud data
This paper | 2019 | Offline and online | Lane keeping | Comparison between offline and online testing results

Further, there is a growing body of evidence demonstrating that simulation-based testing is effective at finding violations.
For example, recent studies of robotic applications show that simulation-based testing of robot function models not only reveals most bugs identified during outdoor robot testing, but can additionally reveal several bugs that could not have been detected by outdoor testing [33]. In summary, even though online testing has received more attention recently, most existing approaches to testing DNNs in the context of ADS focus on offline testing. We note that none of the existing techniques compare offline and online testing results, nor do they assess the relative effectiveness of test datasets obtained from simulators compared to those captured from real life.

V. CONCLUSION

In this paper, we distinguish two general modes of testing, namely offline testing and online testing, for DNNs developed in the context of Advanced Driving Systems (ADS). Offline testing searches for DNN prediction errors based on test datasets obtained independently from the DNNs under test, while online testing focuses on detecting safety violations of a DNN-based ADS in a closed-loop mode by testing it in interaction with its real or simulated application environment. Offline testing is less expensive and faster than online testing, but may not be effective at finding significant errors in DNNs. Online testing is more easily performed, and safer, with a simulator, but we have no guarantee that the results are representative of real driving environments. To address these concerns, we conducted a case study comparing the offline and online testing of DNNs for the end-to-end control of a vehicle. We also investigated whether we can use simulator-generated datasets as a reliable substitute for real-world datasets in DNN testing. The experimental results show that simulator-generated datasets yield DNN prediction errors that are similar to those obtained by testing DNNs with real-world datasets. Also, offline testing appears to be more optimistic than online testing, as many safety violations identified by online testing were not suggested by offline testing prediction errors. Furthermore, large prediction errors generated by offline testing always led to severe safety violations detectable by online testing. These results have important practical implications for DNN testing in the context of ADS. As part of future work, we plan to develop an approach that effectively combines offline and online testing to automatically identify critical safety violations. We also plan to investigate how to improve the performance of DNN-based ADS by using the identified prediction errors and safety violations for further learning.
A computational insight on the inhibitory potential of 8-Hydroxydihydrosanguinarine (8-HDS), a pyridone-containing analogue of sanguinarine, against SARS-CoV2.

The unprecedented global pandemic of COVID-19 has created a daunting scenario urging the immediate development of therapeutic strategies. Interventions to curb the spread of viral infection primarily involve setting targets against the virus. In this study we target the S protein, to obstruct viral attachment and entry, and the M pro, to prevent viral replication. For this purpose, the interactions of the S protein and M pro with the phytocompounds sanguinarine and eugenol, and their derivatives, were studied using computational tools. Docking studies gave evidence that 8-Hydroxydihydrosanguinarine, a derivative of sanguinarine, showed the maximum binding affinity with both targets. The binding energies of the ligand with the S protein and M pro were ΔGb = −9.4 kcal/mol and ΔGb = −10.3 kcal/mol, respectively. MD simulation studies indicate that the phytocompound can effectively cause structural perturbations in the targets which would affect their functions. 8-Hydroxydihydrosanguinarine distorts the α-helix in the secondary structure of M pro and the RBD site of the S protein. A protein-protein interaction study in the presence of 8-hydroxydihydrosanguinarine (8-HDS) also corroborates the above findings, indicating that this polyphenol interferes in the coupling of the S protein and ACE2. The alterations in protonation of M pro suggest that the protein structure undergoes significant structural changes at neutral pH. The ADME (physicochemical, lipophilicity, water solubility, pharmacokinetics, drug-likeness) properties of 8-hydroxydihydrosanguinarine indicate that it could be a potential drug. This makes the phyto-alkaloid a possible therapeutic molecule for anti-COVID-19 drug design.

Introduction

The ominous COVID-19 has gripped the globe with panic and distress. The coronavirus, also known as SARS-CoV2 because of around 79.5% genomic identity with the RNA of SARS-CoV, [1] belongs to the genus betacoronavirus. [2][3][4] Due to its high rate of transmission and the unavailability of specific therapy, it was proclaimed a pandemic by the WHO. SARS-CoV2, an enveloped, positive-sense, single-stranded RNA virus, causes respiratory infections in humans. [5] The single-stranded genomic RNA, ~30 kb in length, [6] comprises at least 6 open reading frames (ORFs) along with a 5' cap structure and a 3' poly(A) tail. [7] The first ORF, occupying about two-thirds of the genome length, encodes two translational products, the polyproteins pp1a and pp1ab, [8] which mediate viral replication and transcription. Viral expression is coordinated by a highly complex proteolytic processing cascade. [9] The main protease M pro (also known as 3CL-pro) plays a pivotal role in processing these polyproteins into 16 mature non-structural proteins (nsps). [8] The nsps are employed for the production of subgenomic RNAs which are required for the synthesis of the structural proteins, i.e. the envelope (E), spike (S), membrane (M) and nucleocapsid (N) proteins, and other accessory proteins. Various X-ray crystallographic studies [7,10] show that M pro, comprising protomers a and b, consists of 306 amino acid residues. Each protomer consists of three domains: domain I (residues 8-99), domain II (residues 100-183) and domain III (residues 200-306). A connecting loop between domains II and III spans residues 184 to 199.
The N-terminal region or N-finger (residues 1-7) is responsible for the proteolytic activity, [11,12] whereas the C-terminal domain III of M pro plays a major role in the dimerization of protomers a and b. [13] The establishment of SARS-CoV in the host cell involves proteolytic processing events mediated through M pro which direct gene expression and replication. [11] There are about 11 cleavage sites of M pro on the larger polyprotein pp1ab, with Leu-Gln↓(Ser, Ala, Gly) as the recognition sequence. Thus, inhibiting this protease activity would block viral replication. [14]

The secondary structure of the M protease as predicted by SOPMA (Self Optimised Prediction Method with Alignment) showed that the M protease is composed of 306 amino acid residues comprising 89 α-helices (29.08%), 35 β-turns (11.44%), 99 random coils (32.35%) and 83 extended strands (27.12%). ExPASy ProtParam revealed 26 negatively charged (Asp + Glu) and 22 positively charged (Arg + Lys) residues. The aliphatic index was determined as 82.12, and the GRAVY (Grand Average of Hydropathicity) was found to be -0.019. The instability index was scored as 27.65. These characteristics support that the protein is stable. The estimated half-lives of the M protease in mammalian reticulocytes, yeast and Escherichia coli were determined as 1.9 hours, 20 hours and 10 hours, respectively (Table ST3).

Structure alignment

Using TM-align (https://zhanglab.ccmb.med.umich.edu/TM-align/), the structures of the M pro of SARS-CoV2 and SARS-CoV were superimposed for comparative analysis. Due to their maximum sequence similarity, these two viral strains were taken for structure-structure superimposition. From the structural alignment, it was inferred that the M protease of SARS-CoV2 differs from that of SARS-CoV in only 12 amino acids, comprising 6 mutations in domain I, 1 mutation in domain II, 4 mutations in domain III and 1 mutation in the loop region between domains II and III. Thus, it was deduced from the structure and sequence alignments that SARS-CoV is closely related to SARS-CoV2 (Figure 1).

S protein-ACE2 interaction in presence of 8-HDS

The best 10 docking models with different free energies were obtained from the ClusPro web-server. The total RMSD value was taken as the criterion for grouping. [31] Our study analysed the resulting models of the S protein-ACE2 interaction in the presence of 8-HDS.

Molecular Simulation analysis

As the best docking results were observed for 8-HDS with both the S protein and the M protease, this compound was taken further for the molecular simulation studies.

Interaction of S protein with 8-hydroxydihydrosanguinarine

The average change in displacement of atoms for all frames with respect to the reference frame of the trajectory, i.e. the protein-ligand Root Mean Square Deviation (RMSD), stabilizes after 40 ns at 2.5 Å (Figure S8) for the S protein-8-HDS complex. Such changes suggest that 8-HDS is capable of causing significant conformational changes in the protein during simulation. The plot illustrates that the RMSD of the ligand is smaller than that of the protein, which suggests that the compound has not diffused away from the binding site. The structural changes in the protein, as determined through the RMSF plot, reveal that the maximum fluctuations occur between amino acid residues 300-500 (Figure S4).
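For readers who wish to reproduce an RMSF profile like the one in Figure S4, the per-residue computation is straightforward once an aligned trajectory is available as a coordinate array. The sketch below is a generic NumPy implementation and makes no assumption about the simulation package that produced the trajectory.

```python
import numpy as np

def rmsf(coords):
    """Per-residue Root Mean Square Fluctuation from an aligned trajectory.
    coords: array of shape (n_frames, n_residues, 3)."""
    mean_pos = coords.mean(axis=0)                     # time-averaged positions
    disp_sq = ((coords - mean_pos) ** 2).sum(axis=-1)  # squared displacement per frame
    return np.sqrt(disp_sq.mean(axis=0))               # RMSF for each residue
```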
It is known that the S protein's RBD site spans amino acid residues 319 to 591. [32] Therefore, the observed fluctuations fall within the RBD region and may affect ligand binding. Consideration of hydrogen-bond interactions is quintessential in drug design, as they influence drug specificity, metabolisation and absorption. His1058, Ser730 and Thr778 are seen to interact with the ligand over the major part of the 100 ns trajectory. The hydroxyl group of the pyridone ring in 8-HDS is found to be in association with His1058 of the protein for 52% of the 100 ns trajectory. Thr778 and Ser730 interact with the ligand through water bridges for 52% and 33% of the trajectory, respectively (Figure S5).

Interaction of M protease with 8-hydroxydihydrosanguinarine

The Root Mean Square Deviation (RMSD) of the M pro-8-HDS complex was found to be 1.25 Å (Figure 2). Though the protein-ligand interaction occurs over the 100 ns simulation time, substantial interaction was observed in the first 10 ns and between 85-100 ns. The RMSF exhibited its highest peak near residue 50, which lies in an α-helix region (Figure S3). As α-helices are known to determine the functional aspects of a protein, perturbations in this region weaken the protein's action. Generally, fluctuations appear at the N- and C-terminal ends, but peaks appearing all over the RMSF plot indicate that the compound is capable of destabilising the protein by causing conformational changes. The pKa and pKa shift values remain constant for the S protein as well as for the S protein-8-HDS complex (Table ST5).

[Figure 5. Comparative study of pKa of amino acid residues of M pro in the presence and absence of 8-HDS (Table ST4).]

Discussion

In this study we have primarily analysed the molecular interactions of the S protein and M protease with 8-HDS through an in silico approach. Inspecting the sequence and structure alignments through various computational tools improves our understanding of SARS-CoV2. It was inferred from the sequence similarity study that the M protease of SARS-CoV2 shows 96% sequence identity with that of SARS-CoV. The sequence similarities thus confirm that SARS-CoV2 is a descendant of SARS-CoV. Evolutionary studies state that SARS-CoV2 is more closely related to SARS-CoV, which infected several people in 2003 with a ~10% fatality rate, than to MERS-CoV, which caused infections in 2012 with a fatality rate of ~36%. [33] Because of their capability to fit into different environments and their susceptibility to recombination and mutation, coronaviruses can adapt to altered host ranges and tissue tropisms. [34] The major concern now is to contain the spread of the virus. Blocking viral replication through M protease inhibition, and targeting the viral spike protein to prevent its attachment and entry into host cells, can serve as approaches to combat the virus. The phytocompounds eugenol and sanguinarine, as well as their derivatives, were employed for this purpose. Eugenol is known for its cytotoxic, antitumor, [35] anti-inflammatory [40] and antibacterial [41] properties. This compound was tested against viral isolates of different strains of the Herpes virus (HSV), providing protection in varying degrees.
[42][43][44] Eugenol also exhibits antiviral properties against Herpes Simplex virus types 1 and 2, feline calicivirus, tomato yellow leaf curl virus, Influenza A virus, and four airborne phages. [45] Extracted from the roots of Sanguinaria canadensis and other poppy-fumaria species, sanguinarine bears antitumor, [46] antibacterial, [47,48] antifungal [49] and anti-inflammatory properties, [50] and is known to inhibit neutrophil functions such as degranulation and phagocytosis in vitro. [51] This benzophenanthridine alkaloid serves as an inhibitor of protein kinase C, bearing structural homology with chelerythrine. [52] 8-HDS possesses a pyridone ring, which bears the capacity to inhibit viral enzymes essential for replication. Due to such antiviral properties of pyridones, they can be taken as ideal molecules for drug development. [29] Zhang et al. [14] have also suggested that pyridone-containing inhibitors can act against the viral protease. The S protein interacts with the ACE2 receptor of the human cell for viral infection. [19] Structural fluctuation at the RBD site may inhibit the binding of the S protein to the ACE2 receptor, which would eventually prevent viral entry. Therefore, it can be presumed that 8-HDS is capable of hindering the attachment of the RBD site of the S protein to the ACE2 receptor protein (Figures 8, 9). This would indeed pave the way for the utilisation of 8-HDS in the repurposing/design of an effective therapy to prevent viral entry.

Hydrophobic amino acids are the main driving force in protein folding. In general, proteins become functional after being folded into a specific globular structure. During protein folding, hydrophobic amino acids get buried in the core of the protein, protected from water, making the protein fold stable. [53,54] The hydrophobic amino acids Phe8, Ile152, Phe294, Val297 and Val303 of M pro generally interact with 8-HDS during the 100 ns simulation, which caused instability in the protein (Figure S10). In some recent studies, many natural compounds have been evaluated via molecular docking to analyse their binding affinities with CoV proteins relative to antiviral drugs. This serves as a means to evaluate the likelihood of these molecules being drug candidates. Sampangi-Ramaiah et al. [21] assessed the binding affinities of 27 natural products towards the proteases of COVID-19, out of which 15 compounds showed permissible results, i.e. binding energies beyond the threshold of -6.0 kcal/mol. They observed the highest affinities for Glabridin (-8.0 kcal/mol) and Glucobrassin (-8.1 kcal/mol), which are reasonable in comparison with Saquinavir (-9.2 kcal/mol), the synthetic antiviral drug used as a positive control. Similarly, Suravajhala et al. [22] conducted molecular docking of 14 drug candidates with SARS-CoV2 proteins, in which curcumin showed good binding affinity. Meanwhile, in our present study, the candidate drug molecule 8-HDS shows even greater binding affinities towards the S protein (-9.4 kcal/mol) and M pro (-10.3 kcal/mol). This renders the molecule more likely to be accepted as an anti-CoV drug. Molecular simulation analysis corroborates the docking studies, revealing that 8-HDS can efficiently destabilise the S protein and the M protease, thereby inhibiting their functions.
In a more recent study, Zhang et al. [14] opined that pyridone-containing ligands can potentially inhibit the main protease. For a molecule to be an effective drug, it needs to reach the target at an optimised concentration and remain available in bioactive form until the necessary biological events occur. The SwissADME technology makes the process of drug discovery less time- and resource-consuming. Appraisal of the structural and physicochemical properties of development compounds for drug-likeness is enough to consider a molecule as an oral drug candidate. [55] The drug-likeness of a molecule is evaluated with respect to bioavailability by qualitatively examining the probability of the molecule being developed into an oral drug. The pink core area of the Bioavailability Radar, representing lipophilicity, size, polarity, solubility, saturation and flexibility, defines the optimal range of properties for the drug-likeness of the input molecule, 8-HDS (Figure 10, Table ST4). Due to having zero rotatable bonds, 8-HDS shows no Bioavailability Radar reading in the flexibility region. The BOILED-Egg model [56] predicts easy penetration of 8-HDS through the blood-brain barrier (BBB) and human gastrointestinal absorption (HIA). From the physicochemical, lipophilicity, water solubility, pharmacokinetics and drug-likeness properties of 8-HDS, the study arrives at the conclusion that 8-HDS could be a potential drug. The resulting changes in protonation of M pro are in good agreement with the observation that the protein structure undergoes significant structural changes at neutral pH. [57] Our study primarily focuses on 8-HDS, which can be a promising drug molecule against SARS-CoV2 due to the presence of a pyridone ring. This phytocompound is known for its antiviral properties and shows drug-likeness, structural alteration of the targets, binding affinity and molecular interaction. These findings infer that 8-HDS could serve as an effective and potential drug molecule.
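As a rough illustration of the kind of rule-based appraisal that underlies such drug-likeness verdicts, the sketch below applies the classic Lipinski rule-of-five filter. It is only a proxy: SwissADME combines several such rules, and the thresholds below are the standard Lipinski cut-offs, not values taken from this study.

```python
def lipinski_drug_likeness(mol_weight, logp, h_donors, h_acceptors):
    """Classic rule-of-five check, a common proxy for oral drug-likeness.
    Returns True when at most one of the four standard thresholds is violated."""
    violations = sum([
        mol_weight > 500.0,   # molecular weight in g/mol
        logp > 5.0,           # lipophilicity (octanol-water partition)
        h_donors > 5,         # hydrogen bond donors
        h_acceptors > 10,     # hydrogen bond acceptors
    ])
    return violations <= 1
```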
The 2D structures of the phytocompounds were acquired from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/). The CHIMERA 1.11.2 [58] programme was used for conversion to 3D structures. The binding affinities of M pro and the S protein with the derivatives of sanguinarine and eugenol were estimated using AutoDock Vina 1.1.2. [59] Various parameters, such as binding affinity, receptor interacting atoms, receptor pocket atoms, receptor-ligand interaction sites, atomic contact energy (ACE) and side amino acid residues, were studied to recognise the binding site of M pro. Virtual screening of the sanguinarine and eugenol derivatives was conducted on the basis of molecular interaction. The grid box was constructed using 108, 111 and 126 points in the x, y and z directions, respectively, with a grid point spacing of 0.508 Å, a setup intended to cover a wide range of docking outcomes. [61] The native site is assumed to possess a wide range of free energies so as to draw a greater number of results. Initially, about 10^9 positions of the ligand with respect to the receptor were sampled; of these, only the top 10^3 positions among all relative ligand positions with respect to the receptor were selected.

Molecular simulation

The ligand 8-HDS was extracted as a chemically unstandardized 2D structure from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/). LigPrep [62] was used to standardise the ligand files and generate energy-minimised, extrapolated 3D structures, which were virtually screened by Glide. [63] The 'M pro-8-HDS' and 'S protein-8-HDS' complexes were formed using the Grid-based Ligand Docking with Energetics (GLIDE) module of the Schrodinger software. Molecular dynamics simulations were carried out using the Desmond [64] software. Root Mean Square Deviations (RMSD) and atomic fluctuations were studied through Root Mean Square Fluctuation (RMSF) analyses. Different simulation boxes, such as cubic, orthorhombic, truncated octahedron and rhombic dodecahedron, were considered for precisely directing the solvent simulations with periodic boundary conditions. An 8-stage stabilization run was conducted prior to the 100 ns production run. It began with the task stage, followed by simulations with NVT at T = 10 K with small time steps in Brownian dynamics and restraints on solute heavy atoms for 100 ps. The third stage repeated the above, but with restraints on solute heavy atoms for 12 ps; the fourth stage was carried out in a similar manner with NPT instead of NVT. In stage 5, solvated pockets were addressed. Stage 6 was carried out similarly to stage 4. The next stage involved simulation at NPT for 24 ps with no restraints. Finally, the production simulations were run. In this study, molecular simulations were performed specifically for the top two identified hits to study the stability of the ligand-receptor complexes for 100 ns. The stability of the docked complexes 'S protein-8-HDS' and 'M pro-8-HDS' over the 100 ns simulation time was checked using the System Builder of Desmond implemented in Maestro. [65] The systems for 'S protein-8-HDS' and 'M pro-8-HDS' were immersed in water-filled cubic boxes of 10 Å spacing containing 64002 and 10297 water molecules, respectively, built with the System Builder of Desmond in the Maestro program using the extended simple point charge (SPC) water model.
A Modified Least Mean Square Method Applied to Frequency Relaying

Adaptive filtering is useful in any application where the signals or the modeled system vary over time. The configuration of the system and, in particular, the position where the adaptive processor is placed give rise to different application fields, such as prediction, system identification and modeling, equalization and interference cancellation, which are very important in many disciplines such as control systems, communications, signal processing, acoustics, voice, sound and image. The book consists of chapters on noise and echo cancellation, medical applications, communications systems and others, hardly joined except by their heterogeneity. Each application is a case study that rigorously shows the weaknesses and strengths of the method used, assesses its suitability and suggests new forms and areas of use. The problems are becoming increasingly complex, and applications must be adapted to solve them. Adaptive filters have proven useful in environments with multiple inputs and outputs, time-variant behaviors, and long and complex transfer functions, but fundamentally they still have to evolve. This book is a demonstration of this and a small illustration of everything that is to come.

Introduction

In an Electrical Power System (EPS), fast and accurate detection of faulty or abnormal situations by the protection system is essential for a faster return to the normal operating condition. With this objective in mind, protective relays constantly monitor the voltage and current signals, including their frequency. The frequency is an important parameter to be monitored in an EPS because it suffers significant alterations during faults or other undesired situations. In practice, equipment is designed to work continuously between 98% and 102% of the nominal frequency (IEEE Std C37.106, 2004). However, excursions beyond these limits are constantly observed as a consequence of the dynamic unbalance between generation and load. Larger variations may indicate fault situations as well as system overload. In the latter case, the frequency relay can help in the load-shedding decision and, consequently, in power system stability. This prerequisite for stable operation has become more difficult to maintain considering the large expansion of electrical systems (Adanir, 2007; Concordia et al., 1995). The importance of correct frequency estimation for an EPS is then evident, especially when the established limits for normal operation are violated. Frequency deviations can cause serious problems for the equipment connected to the power utility, such as capacitor banks, generators and transmission lines, affecting the power balance. Therefore, frequency relays are widely used in the system to detect power oscillations outside the acceptable operating levels of the EPS. Due to technological advances and the considerable increase in the use of electronic devices over the last decades, frequency variation analyses in EPS have intensified, since modern components are more sensitive to this kind of phenomenon. Taking this into account, the study of new techniques for better and faster power system frequency estimation has become extremely important for power system operation. Thus, several researchers have proposed different techniques to solve the frequency estimation problem.
Algorithms have been proposed based on phasor estimation using the LMS method, the Fast Fourier Transform (FFT), intelligent techniques, the Kalman Filter, Genetic Algorithms, the Weighted Least Squares (WLS) technique, the three-phase Phase-Locked Loop (3PLL) and the Adaptive Notch Filter (Dash et al., 1999; 1997; El-Naggar & Youssed, 2000; Girgis & Ham, 1982; Karimi-Ghartemani et al., 2009; Kusljevic et al., 2010; Mojiri et al., 2010; Phadke et al., 1983; Rawat & Parthasarathy, 2009; Sachdev & Giray, 1985). The adaptive filter based on the LMS method is presented in the following section.

The algorithm based on LMS

The algorithm based on the LMS method, presented in Fig. 1, is a combination of an adaptive process with digital filtering. In this figure, w(n) is the vector with the filter coefficients, y(n) is the desired signal (the filter output) and e(n) is the error associated with the filter approximation. The input signal of the filter can be estimated by minimizing the squared error through the adaptation of the coefficients w(n), which are recursively adjusted to obtain optimal values. At each iteration, the coefficients can be calculated by

w(n + 1) = w(n) − μ∇(n)    (1)

where μ is the convergence parameter and ∇ is the gradient of the error performance surface, which is responsible for determining the adjustment of the coefficients. The LMS algorithm is very sensitive to μ. This can mainly be observed in the speed of the estimation and the processing time: the smaller the value of μ, the longer the time to reach the aimed error, and vice-versa. However, it is important to respect the convergence interval given by (Haykin, 2001)

0 < μ < 1/(N · S_max)    (2)

where N is the filter size and S_max is the maximum value of the power spectral density of the input signal.
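A minimal real-valued LMS filter following Fig. 1 and equations (1)-(2) could look as follows; the signals, filter order and step size are illustrative, and the instantaneous-gradient form of the update (2·e(n)·u(n)) replaces the exact gradient of equation (1), as is standard for LMS.

```python
import numpy as np

def lms_filter(x, d, N=8, mu=0.01):
    """Baseline fixed-step LMS: w is adapted so that the filter output y
    tracks the desired signal d. mu must respect the convergence bound
    0 < mu < 1/(N * S_max) of equation (2)."""
    w = np.zeros(N)
    y, e = np.zeros(len(d)), np.zeros(len(d))
    for n in range(N, len(d)):
        u = x[n - N:n][::-1]           # most recent N input samples
        y[n] = np.dot(w, u)            # filter output
        e[n] = d[n] - y[n]             # estimation error
        w = w + 2 * mu * e[n] * u      # instantaneous-gradient coefficient update
    return y, e, w
```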
With the filter in place, most of the harmonic components are eliminated, increasing the precision of the proposed method. The data acquisition was performed in a moving window with a one-sample step. All the filter processing should be performed on one data window, respecting the available time for processing, which is the time between two consecutive samples. Fig. 4 illustrates this process.

The normalization process

Normalization standardizes the data obtained from the electrical system, regardless of the voltage level analyzed. Consequently, if either a sag or a swell occurs in any phase, the algorithm will maintain its estimation without loss of precision or speed. Fig. 5 illustrates the normalization process implemented.

The pre-processing process

After normalization, a pre-processing stage was performed, obtaining the signal in its complex form for the digital filter. This was obtained by applying the αβ-Transform to the three-phase voltages, as represented in the following equation (Akke, 1997):

v_α(n) = (2 v_a(n) − v_b(n) − v_c(n)) / 3,
v_β(n) = (v_b(n) − v_c(n)) / √3.    (4)

With the α and β components obtained from (4), the complex voltage is defined as

u(n) = v_α(n) + j v_β(n).    (5)

The coefficient generator

Adapting the filter coefficients is simple and inherent to the algorithm. This adjustment is performed sample by sample in order to make sure that the mean squared error is minimized. However, to improve the algorithm performance and minimize the processing time, the estimation filter coefficients w(n) are initialized with the estimation of the previous window (Barbosa et al., 2010). The first window is initialized with the fundamental frequency of the electrical system. The aim of this procedure is to increase the speed of the estimation process. The coefficient generator flowchart is shown in Fig. 6.

The adaptive filter

In the adaptive filter, the coefficients are updated recursively to minimize the squared error. The error is calculated as the difference between the desired and estimated values, given by

e(n) = u(n) − y(n),    (6)

where y(n) represents the estimated value. The complex voltage u(n) can be described by

u(n) = U_max e^{j(ωnΔT + φ)} + ζ,    (7)

where U_max is the amplitude of the complex signal, ζ is the noise component, ΔT is the sampling interval, φ is the phase of the signal, n is the sample number and ω is the angular frequency of the analyzed signal. The estimated complex voltage y(n) can be represented by

y(n) = U_max e^{j(ω̂nΔT + φ)}.    (8)

Equations (7) and (8) are the basis of the model used for the proposed frequency estimation. Although the filter output can be represented by the previous model, it is computed as a linear combination between the input vector, lagged by one sample, and the vector of filter coefficients:

y(n) = w^H(n) u(n − 1),    (9)

where the superscript H denotes the Hermitian (conjugate) transpose and w is the vector of filter coefficients. This vector encodes the rotation between two consecutive samples, i.e., the phase difference between the samples being analyzed:

w ≈ e^{jω̂ΔT},    (10)

where ω̂ is the estimated angular frequency. It must be emphasized that the LMS task is to find the filter coefficients that minimize the error. Following this procedure, the filter coefficients are updated until the error is sufficiently small. The complex weight vector at each sampling instant is given by (Widrow et al., 1975)

w(n + 1) = w(n) + 2μ e(n) u*(n − 1),    (11)

where the (*) symbol denotes the complex conjugate and μ is the convergence factor controlling the stability and rate of convergence of the algorithm.
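The pre-processing and update steps above translate almost line for line into code. Continuing the illustrative Python of the previous sketch (again my naming, not the chapter's), the fragment below implements the αβ-Transform of (4)-(5) and one LMS iteration covering (6), (9) and (11). For clarity it uses a single complex tap; in the chapter's higher-order filter, w and u become vectors and the products in (9) and (11) become inner products.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def alpha_beta(va, vb, vc):
    """Amplitude-invariant alpha-beta transform, eq. (4), returning the
    complex voltage u = v_alpha + j*v_beta of eq. (5)."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / SQRT3
    return v_alpha + 1j * v_beta

def lms_step(w, u_prev, u_now, mu):
    """One single-tap complex LMS iteration."""
    y = w * u_prev                           # prediction, eq. (9)
    e = u_now - y                            # error, eq. (6)
    w = w + 2.0 * mu * e * np.conj(u_prev)   # weight update, eq. (11)
    return w, y, e
```

Once converged, the angle of w is the per-sample phase advance of (10), which is precisely what the frequency estimator described in the following subsections reads off.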
Fig. 7 shows the evolution of the coefficients of an eighth-order adaptive filter during the iterations. The step size μ(n) is modified for better convergence in the presence of noise; following (Aboulnasr & Mayyas, 1997), it is given by

μ(n + 1) = λ μ(n) + γ p²(n),    (12)

where p(n) represents the error autocorrelation and is calculated as

p(n) = ρ p(n − 1) + (1 − ρ) e(n) e(n − 1).    (13)

In the equation, ρ is the exponential weighting parameter. The parameters ρ (0 < ρ < 1), λ (0 < λ < 1) and γ (γ > 0) are constants that control the convergence time; they are determined by statistical studies (Kwong & Johnston, 1992).

The stability of the proposed algorithm

Stability is a critical factor in the implementation of the proposed algorithm, especially if the convergence factor μ falls outside its associated range. To avoid this problem, the samples of each data window are continuously monitored, providing a self-tuning convergence range. The behavior of μ is controlled by equation (14) (Wies et al., 2004):

0 < μ(n) < 2 / (N · P̂(n)),   with   P̂(n) = (1/M) Σ_{k=n−M+1}^{n} |u(k)|²,    (14)

where N and M are the filter and window sizes, respectively, and P̂(n) is the input power estimated over the current window. Fig. 8 shows the proposed algorithm flowchart with the stability control.

The frequency estimation

The frequency estimation was performed according to Begovic et al. (1993). To find the phase difference, the complex variable Γ was defined as

Γ = y(n) y*(n − 1).    (15)

The relationship between Γ and the system frequency is obtained by expanding the equation above:

Γ = U_max² e^{j2πf_est ΔT} = e^{j2πf_est/f_s},    (16)

where U_max = 1, since the input signal is normalized, and f_est is the estimated frequency. The frequency of the estimated signal y(n) was thus calculated as a function of the phase difference between two consecutive samples, the latter provided by

f_est = (f_s / 2π) · arctan[ℑ(Γ) / ℜ(Γ)],    (17)

where f_s is the sampling frequency and ℜ(·) and ℑ(·) are the real and imaginary parts, respectively. (A compact code sketch combining these steps is given at the end of this section.)

The convergence process

The stop rule adopted was the maximum number of iterations (1,000) or an error smaller than 10⁻⁵. This error can be estimated by

e_relat = |y(n) − u(n)| / |u(n)|,    (18)

where e_relat is the relative error between samples, y(n) is the estimated value and u(n) is the desired value (the input sample).

The post-processing of the output signal

The output signal (the estimated frequency) is additionally filtered by a second-order Butterworth low-pass digital filter with a cut-off frequency of 5 Hz. This procedure reduces the oscillation present in the output of the proposed method, avoiding errors due to abrupt variations of the frequency. It is important to observe that the delay of the low-pass filter does not influence the algorithm performance negatively, as can be seen in the results.
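As flagged above, the estimation core combining the variable step size of (12)-(13) with the phase-difference read-out of (15)-(17) can be sketched as follows, continuing the same illustrative Python (the default parameter values anticipate the test-case settings quoted in the next section):

```python
import numpy as np

def update_step_size(mu, p, e, e_prev, lam=0.97, gamma=0.01, rho=0.99,
                     mu_min=0.001, mu_max=0.18):
    """Variable step size of eqs. (12)-(13) (Aboulnasr & Mayyas, 1997).
    The conjugate generalizes the real-valued p(n) to complex errors."""
    p = rho * p + (1.0 - rho) * e * np.conj(e_prev)
    mu = lam * mu + gamma * np.abs(p) ** 2
    return float(np.clip(mu, mu_min, mu_max)), p

def frequency_estimate(y_now, y_prev, fs):
    """Frequency from the phase rotation between two consecutive estimated
    samples, eqs. (15)-(17); arctan2 resolves the quadrant of eq. (17)."""
    gamma_c = y_now * np.conj(y_prev)          # eq. (15)
    return fs / (2.0 * np.pi) * np.arctan2(gamma_c.imag, gamma_c.real)
```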
The simulated electrical system

Fig. 9 shows the representation of the simulated electrical system, taking into account load switching and permanent faults in order to evaluate the frequency estimation technique proposed in this work. The electrical system consists of a 13.8 kV, 76 MVA (60 Hz) synchronous generator; 13.8/138 kV and 138/13.8 kV, 25 MVA three-phase power transformers; transmission lines between 80 and 150 km in length; and loads between 5 and 25 MVA with a 0.92 inductive power factor. The power transformers have a delta connection in the high-voltage winding and a star connection in the low-voltage winding. They were modeled in the ATP software (saturable transformer component) considering their saturation curves, as illustrated in Fig. 10. Tables 1 and 2 show the parameters used to simulate the power system components in ATP.

In Table 1, S is the total three-phase volt-ampere rating of the machine, N_p is the number of poles which characterize the machine rotor, V_L is the rated line-to-line voltage of the machine, f is the electrical frequency of the generator, I_FD is the field current, R_a is the armature resistance, X_l is the armature leakage reactance, X_o is the zero-sequence reactance, X_d is the direct-axis synchronous reactance, X_q is the quadrature-axis synchronous reactance, X'_d is the direct-axis transient reactance, X''_d is the direct-axis subtransient reactance and X''_q is the quadrature-axis subtransient reactance. It is important to emphasize that the transmission line model used was the JMARTI model from ATP, because it allows the line parameters to vary as a function of frequency and consequently provides a better representation of the system's behavior when facing disturbances resulting from unbalance between generation and load. It must also be emphasized that the synchronous generator was simulated with an automatic speed control for hydraulic systems (Boldea, 2006) and automatic voltage regulation (AVR) (Boldea, 2006; Lee, 1992; Mukherjee & Ghoshal, 2007), considering various electrical and mechanical parameters of the generator. Equation (19) shows the transfer function of the speed regulator used:

G_GER(s) = η(s) / ΔF(s) = (1/R) · (1 + sT_r) / [(1 + sT_g)(1 + s(r/R)T_r)],    (19)

where η(s) is the servomotor position, ΔF(s) is the frequency deviation, R is the steady-state speed droop, r is the transient speed droop, T_g is the main gate servomotor time constant and T_r is the reset time. Table 3 presents the parameters concerning the speed regulator.

Fig. 11 shows the block diagram of the excitation control system which was used. The basic function of the excitation control system is to automatically adjust the magnitude of the DC field current of the synchronous generator so as to maintain the terminal voltage as the output varies within the capacity of the generator (Kundur, 1994). The field voltage control can improve the transient stability of the power system after a major disturbance. However, the extent of the field voltage output is limited by the exciter's ceiling voltage, which is restricted by the generator rotor insulation (Kundur, 1994; Leung et al., 2005).

Test cases

This section presents the results of the proposed scheme. Although a great deal of data was used to test the proposed technique, only four cases of abnormal operation were carefully chosen to illustrate the technique's performance concerning the electrical system presented in Fig. 9. Each condition imposes a particular dynamic behavior on the power balance and, consequently, on the variation of the power system frequency. Measurements from a commercial relay (function 81) were obtained by applying the simulated voltage signals from ATP, in order to compare the results. Moreover, the actual frequency of the EPS was measured directly from the angular speed of the synchronous generator. It should be emphasized that sample rates of 1,920 Hz and 1,000 Hz were used in the FEALMS software and in the commercial relay (function 81), respectively. Because the adjustment of the filter parameters has a great influence on the results, these parameters were selected according to Kwong & Johnston (1992): μ_max = 0.18, μ_min = 0.001, p_initial = 0, λ = 0.97, γ = 0.01 and ρ = 0.99.
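As a hypothetical end-to-end check (building on the earlier sketches, and therefore on my naming rather than the chapter's), the fragment below feeds a normalized complex voltage at an off-nominal frequency through the single-tap estimator with exactly the parameter values quoted above; the coefficient is initialized at the nominal 60 Hz, mirroring the first-window initialization described earlier.

```python
import numpy as np

FS, F_TRUE = 1920.0, 59.5                  # sampling rate, test frequency (Hz)
n = np.arange(2 * int(FS))                 # two seconds of samples
u = np.exp(2j * np.pi * F_TRUE * n / FS)   # normalized complex voltage, eq. (7)

w = np.exp(2j * np.pi * 60.0 / FS)         # first window: nominal frequency
mu, p, e_prev = 0.18, 0.0 + 0.0j, 0.0 + 0.0j
for k in range(1, len(u)):
    w, y, e = lms_step(w, u[k - 1], u[k], mu)   # eq. (11), earlier sketch
    mu, p = update_step_size(mu, p, e, e_prev)  # eqs. (12)-(13)
    e_prev = e

# After convergence w ~ exp(j*2*pi*f_est/FS) by eq. (10), so the frequency
# can be read directly off the coefficient:
f_est = FS / (2.0 * np.pi) * np.arctan2(w.imag, w.real)
print(f"estimated frequency: {f_est:.3f} Hz")   # ~59.5 Hz expected
```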
Based on Fig. 9, the simulated situations were:
• a sudden connection of load blocks;
• a permanent fault involving phase A and ground (AG) on the BGER busbar at 2 s;
• a sudden disconnection of the TR1E and TR3E transformers at 1 s;
• a permanent fault at 50% of line 1;
• the generator overexcitation;
• the TR3E transformer energization with full load.

A sudden connection of load blocks

Fig. 12(a) shows the estimation of the synchronous generator frequency using FEALMS, the ATP software reference curve and the commercial frequency relay response, considering the connection of load blocks to the BGCH3 busbar. In the figure, a slight delay in the frequency estimation by the relay can be observed when compared to the correct result given by the ATP software curve. In this situation, very good precision of FEALMS can be observed, even at critical points of the system's behavior. The error concerning the application of the proposed technique is also presented, as illustrated in Fig. 12(b).

A permanent fault involving phase A and ground (AG) on the BGER busbar at 2 s

Fig. 13(a) shows the estimation given by the proposed technique, the ATP reference curve, as well as the commercial frequency relay performance for an AG fault on the BGER busbar at 2 s. It can be seen that the commercial frequency relay tested loses the reference voltage, hindering the frequency estimation. It should be emphasized that even in this unfavorable condition, FEALMS estimated the power system frequency satisfactorily.

A sudden disconnection of the TR1E and TR3E transformers at 1 s

Fig. 14(a) also illustrates the FEALMS response, the ATP reference curve, as well as the commercial frequency relay for a disconnection of the TR1E and TR3E transformers. The analysis of this condition is fundamental to test the algorithm's robustness in practical field situations, taking into account the high level of distortion present in the input signals.

A permanent fault at 50% of line 1

In Fig. 15, the frequency variation and the relative error of the frequency estimation by the proposed technique are shown for an AG fault at 50% of the length of transmission line 1. In this situation, the corresponding three-phase circuit breaker opened 80 ms after the fault inception. It should also be observed that the recovery of the machine synchronization was achieved in this situation. It can be seen from Fig. 17(b) that most of the estimation errors for FEALMS are below 0.05%.

Conclusions

This chapter presented an alternative method for frequency estimation in electrical power systems using the modified LMS algorithm. The implementation of digital filtering and the αβ-Transform in the proposed technique made it possible to use the three-phase power system voltages simultaneously for the estimation. In the complex LMS algorithm considered, the step size was adjusted adaptively based on estimates of the error. The simulations used for testing the proposed algorithm were obtained using the ATP software. The adaptive filter theory applied to digital protection proved fast and reliable. Some points should be observed:
1. The FEALMS algorithm can be applied to various situations and voltage levels, as it is not influenced by the magnitude of the input waveforms;
2. Three-phase voltages were analyzed, contrary to the single phase used in commercial relays, making the proposed algorithm more robust;
3. The technique is easy to implement and does not require the adjustments and knowledge of further functions demanded by the commercial relay (function 81) which was used;
4. Applying the FEALMS algorithm, the average error in all the cases studied was 0.08%.
It is also important to highlight the feasibility and computational efficiency of this method, making it suitable for commercial applications.
#ItsNotInYourHead: A Social Media Campaign to Disseminate Information on Provoked Vestibulodynia

Provoked Vestibulodynia (PVD) is a type of localized vulvodynia (or pain in the vulva). The estimated prevalence of this condition is about 12% of the general population and approximately 20% of women under the age of 19. Many women who live with PVD suffer in silence for years before receiving a diagnosis. Whereas cognitive behavioral therapy (CBT) was already known to be effective for managing symptoms of PVD, there has recently been a published head-to-head comparison of CBT versus mindfulness-based therapy for the primary outcome of pain intensity with penetration. The trial revealed that both treatments were effective and led to statistically and clinically meaningful improvements in sexual function, quality of life, and reduced genital pain, with improvements retained at both 6- and 12-month follow-ups. We then undertook an end-of-grant knowledge translation (KT) campaign focused on the use of social media to disseminate an infographic video depicting the findings. Social media was strategically chosen as the primary mode of dissemination for the video as it has a broad audience reach, the public can access information on social media for free, and it presented an opportunity to provide social support to the population of women with PVD, who are characterized as suffering in silence, by starting a sensitive and empowering dialogue on a public platform. In this paper, we summarize the social media reach of our campaign, describe how and why we partnered with social media influencers, and share lessons learned that might steer future KT efforts in this field.

Introduction

Provoked Vestibulodynia (PVD), characterized by provoked pain with touch to the vulvar and/or vaginal area, without obvious signs, affects up to 12% of women (Pukall & Cahill, 2014; Reed et al., 2012) and is associated with a wide range of psychological, relational, and sexual difficulties (Basson, 2012; Desrochers, Bergeron, Landry, & Jodoin, 2008). Among sexually active young women under the age of 20, the prevalence of pain with intercourse for 6 months or more was found to be 20% (Landry & Bergeron, 2009). The economic burden of vulvodynia is $31-72 billion per year in the U.S. (Xie et al., 2012), including both direct costs (e.g., women with PVD often visit multiple healthcare providers, and 40% of women with vulvar pain remain undiagnosed after seeking treatment) (Harlow & Stewart, 2003) and indirect costs (e.g., income loss due to time away from work). A more recent study estimated that the total direct costs per patient with PVD to the American healthcare system were over $117,000 per year when pharmacy costs were included (Lua, Hollette, Parm, Allenback, & Dandolu, 2017).

Evidence for Psychological Treatments of Provoked Vestibulodynia

There is growing evidence for psychosocial-based approaches to pain management. Among these approaches, cognitive behavior therapy (CBT) is the most commonly studied psychological treatment for PVD. One randomized trial comparing CBT to surgery (vestibulectomy) and pelvic muscle biofeedback found all three treatments to be effective for reducing vulvar pain intensity (as measured by a gynecologist's cotton swab test) 6 months after treatment, although the vestibulectomy group showed the greatest reduction in pain during the cotton swab test of allodynia, with a 70% reduction in pain in the surgery group versus a 29% reduction in the CBT group (Bergeron et al., 2001).
At a 2.5-year follow-up, pain intensity during intercourse was equivalent in the CBT and vestibulectomy groups (Bergeron, Khalifé, Glazer, & Binik, 2008), and reductions in pain intensity and improvements in sexuality outcomes (e.g., a global sexuality score that includes desire, arousal, orgasm, frequency of sexual activities, and overall satisfaction) were maintained for all three treatment groups. A pilot randomized trial comparing CBT to pelvic floor physical therapy also found comparable improvements in both groups, with effects maintained at 6 months after treatment (Goldfinger, Pukall, Thibault-Gagnon, McLean, & Chamberlain, 2016). Specifically, 70% of those in the CBT group and 80% in the physical therapy group showed a moderately clinically important decrease in pain. Another randomized study, comparing individually delivered CBT to supportive group therapy administered weekly over 10 weeks, found that CBT led to significantly greater reductions in non-sexual provoked pain and improvements in sexual functioning compared to the support group (Masheb, Kerns, Lozano, Minkin, & Richman, 2009). Specifically, 39% of the participants receiving CBT had at least a 33% reduction in their vulvar pain intensity. One randomized trial compared 10 sessions of group CBT to the application of a topical steroid (1% hydrocortisone cream) for 13 weeks for the primary outcome of pain during intercourse (Bergeron, Khalifé, Dupuis, & McDuff, 2016). Whereas participants in both conditions improved, women in the CBT group reported greater reductions in pain with intercourse, greater improvements in sexual function at the 6-month post-treatment time point, greater improvements in pain catastrophizing, and greater treatment satisfaction, but similar pain self-efficacy, as compared to women receiving the topical steroid. Specifically, 68.6% of those in the CBT group reported good improvement to complete relief of pain at 6 months following treatment. Taken together, CBT has been recommended with Level 2 evidence as an effective treatment for pain intensity as well as sexual function and other associated psychological symptoms in women with PVD (Goldstein et al., 2016). Though CBT is considered a second-generation, skills-oriented approach aimed at changing and challenging thoughts, newer, third-wave approaches focus on cultivating the skill of acceptance. Mindfulness, a meditative practice defined as "non-judgmental, present-moment awareness" (Bishop et al., 2004), aims to increase awareness of, for example, pain-related thoughts and physical sensations with equanimity and without the intention of controlling or changing them. Stemming from the early work of Kabat-Zinn in the mid-1970s (Kabat-Zinn, 1982, 1990; Kabat-Zinn, Lipworth, & Burney, 1985), this mindfulness-based approach has been adopted by the general chronic pain field as an effective treatment for a number of chronic pain conditions compared to a wait-list control, treatment as usual, or psychoeducation (Hilton et al., 2017; Kerns, Sellinger, & Goodin, 2011). Mindfulness promotes a state of awareness in which thoughts are allowed to reside in consciousness without any emotional attachment or aversion to them. It has been described as "uncoupling" the physical sensation from the emotional and cognitive experience of pain (Kabat-Zinn, 1982).
Paying attention to physical sensations is processed distinctly from an experience's emotional qualities, with the former processed in the inferior parietal and primary somatosensory cortices and the latter in the perigenual anterior cingulate and anterior midcingulate cortices (Kulkarni et al., 2005). Recently, CBT has been compared to an equal-duration mindfulness-based cognitive therapy (MBCT) intervention in a head-to-head trial focused on women with PVD (Brotto et al., 2019). Treatment consisted of eight 2-h weekly group sessions led by professional facilitators (sexual medicine physicians, psychologists, and upper-level trainees in clinical psychology) who had expertise in mindfulness-based interventions, CBT, and managing PVD. The duration of sessions, assessments, and educational information about PVD were the same in both arms. The primary endpoint focused on vulvar pain intensity using a numeric rating scale (Farrar, Young, LaMoreaux, Werth, & Poole, 2001) and vulvo-vaginal pain assessed with a vulvalgesiometer (Pukall, Binik, & Khalifé, 2004; Pukall, Young, Roberts, Sutton, & Smith, 2007) designed to administer a fixed amount of pressure to the vulva. Additionally, several secondary endpoints focused on sexual functioning, sex-related distress, and various psychological outcomes used in studies of chronic pain. Both treatments led to similar significant improvements in ratings of provoked vulvar pain using the vulvalgesiometer; overall sexual function; pain catastrophizing; pain hypervigilance; and sex-related distress. Though the effect sizes for both MBCT and CBT were large for the outcome of self-reported pain with vaginal penetration, the effect was greater for MBCT than for CBT, suggesting potentially different mechanisms underlying these two treatments (Brotto, Bergeron, Zdaniuk, & Basson, 2020). All effects were in the moderate-to-very-strong clinically meaningful range when assessed both 2-4 weeks after treatment and at the 6-month and 12-month follow-up periods (Brotto et al., 2019, 2020).

Disseminating the Evidence

There is a gap between current practice and existing evidence when it comes to treating women with PVD. National surveys show that topical steroids and oral antidepressants are the treatments most commonly used by primary care physicians, yet scientific evidence does not find these treatments to be significantly more effective than placebo (Brown, Bachmann, Wan, & Foster, 2018; Foster et al., 2010). On the other hand, there is strong empirical evidence for two psychological approaches to managing PVD (CBT and mindfulness meditation; Brotto et al., 2019; Dunkley & Brotto, 2016; Goldstein et al., 2016). There is a need for women and their care providers to be informed of these evidence-based treatments so that women may receive care that leads to clinically meaningful and lasting improvements in their symptoms. Because PVD is associated with significant increases in depression and anxiety (Khandker, Brady, Stewart, & Harlow, 2014), and because ongoing and chronic mental health symptoms and stress complicate the management of women's genital pain (Bachmann, Brown, & Foster, 2014), addressing mental health has been identified as essential if healthcare providers are to effectively move the needle on treating PVD (Sadownik, 2014).
Women themselves acknowledge that they want healthcare providers to discuss the role of psychological factors in perpetuating their PVD, and that this is distinct from receiving the message that the pain is "all in their heads" (Shallcross et al., 2019). In direct response to this state of affairs, we launched this knowledge translation (KT) project designed to disseminate evidence-based information directly to women with PVD in a social media campaign entitled #ItsNotInYourHead. KT is designed to address two well-known gaps in the translational continuum, which the Canadian Institutes of Health Research refers to as the "Valleys of Death." The first gap lies between basic science and clinical science, and the second gap between clinical science and clinical practice; these gaps have contributed to the often-cited figure of 17 years before new scientific data are adopted into practice (Morris, Wooding, & Grant, 2011). Furthermore, only 14% of clinical research ever makes its way into practice (Balas & Boren, 2000). Importantly, such gaps directly impact the care that patients receive when seeking treatment, and over half of physicians report not having adequate information to guide their treatment decisions (Dawes & Sampson, 2003; Kiesler & Auerbach, 2006; McGlynn et al., 2003). KT, also known as dissemination, is a set of strategies designed to share scientific information with target audiences (Kirchner, Waltz, Powell, Smith, & Proctor, 2017) and is widely recognized by funding agencies as a critical aspect of research. Research shows that women with PVD frequently go online to learn about different treatment options and treatment centers, given their view that their healthcare providers lack key information about PVD (Shallcross et al., 2019). The primary goals of this project were: (1) to develop a social media dissemination strategy and campaign and (2) to document reach by capturing metrics associated with various forms of social media. The long-term goal was to facilitate the uptake of scientific evidence by women (and other key stakeholders) who can directly utilize the new knowledge about PVD. Our goal was to maximize the reach of #ItsNotInYourHead using a variety of tactics and strategies, in the hope that women living with chronic genital pain would have access to information that might lead to an earlier diagnosis of PVD and which could facilitate conversations with their healthcare providers about possible treatment options. Since this project focused on reach, the latter putative outcomes were not measured.

Knowledge Translation Framework

We were guided by the knowledge-to-action cycle framework (Straus, Tetroe, & Graham, 2009), which articulates the processes from knowledge creation, to tailoring knowledge, to application of knowledge. There are two aspects of the cycle (Fig. 1): knowledge creation (represented by the middle funnel) and the action cycle (outer circle), which are seen as iterative and dynamic. This project focused on the centerpiece, knowledge creation, which includes knowledge inquiry (completion of primary research), synthesis (bringing different sources of research knowledge together), and production of tools. This project created knowledge toolkits and an infographic video. To facilitate our knowledge-to-action processes, we used the Knowledge Translation Planning Template (National Collaborating Centre for Methods and Tools, 2012) to develop our KT strategy. The checklist allowed our team to consider all stages of our knowledge translation strategy.
We elected social media as our primary method of knowledge sharing given its exponential growth for communicating health-related topics to broad audiences (Hamm et al., 2013; Perrin, 2015). The template includes the following topics that should be considered in all KT projects: project partners (who are the partners on the team), degree of partner engagement (which aspects of the project will each partner participate in), partner roles (what does each partner bring to the project), KT expertise on the team (who holds which type of KT expertise on the team), knowledge users (which knowledge users or audiences will the KT activities target?), main messages (what messages are intended for the primary audiences?), KT goals (these should be specific to each knowledge user and audience), KT strategies (these should be informed by evidence of their effectiveness), KT process (will activities be integrated during the research or at the end?), impact and evaluation (where do you want to have an impact and how will that be evaluated?), resources (what outside supports are necessary for the KT activities), budget, and implementation (how will you implement the KT strategy). We chose the components that were most relevant to our current project and further elaborate on them below.

Project Partners: Roles and Expertise

The core team consisted of a clinician-scientist with expertise in PVD and training in KT; a knowledge user partner who was a patient and research participant in studies of PVD; a communications assistant who had expertise in social media analysis and metrics; and a digital health research manager with expertise in KT and digital health technologies. Additional team members included a patient advisory group, who developed the infographic video used in dissemination; a media design company (The Thinking Box); an award-winning digital marketing agency (Ehm&Co) engaged to further boost the profile of the campaign; and five social media influencers who had audiences that aligned with our campaign values and who were contracted to amplify our campaign messages. The core team remained involved from initial project design through analysis, whereas the other team members participated at key time points throughout the campaign.

Campaign Main Messages

The main messages we intended to disseminate through the campaign were: (1) chronic vulvar pain is common and you are not alone, and (2) there is evidence that psychological treatments can be effective in managing symptoms. The main KT goals were to generate awareness of and interest in PVD and to impart knowledge. The target audience was women who may be experiencing chronic genital pain, regardless of whether they have been diagnosed with PVD or not. Our secondary audience members were healthcare practitioners, policymakers, partners, researchers, the media, and the general public. Because women's decisions about healthcare may be directly influenced by their partners' attitudes, by suggestions made by their healthcare providers, and by views of the general public, we believed it was important to also target these audiences in our dissemination activities.

Fig. 1 Knowledge-to-action (KTA) framework (Straus et al., 2009)

Strategies Employed

Social media was chosen as the primary KT strategy based on our prediction that our reach would be greatly enhanced compared to using other face-to-face or written vehicles for translation of this knowledge.
Social media is the set of tools and networking platforms that allow people to connect, communicate, and collaborate using web-based technology (Jue, Marr, & Kassotakis, 2009). The advantages of social media for disseminating scientific information are well documented (Hemsley & Mason, 2013; Oakley & Spallek, 2012) and include the rapid dissemination of information, the broad reach of the audience, the ability to create a community around a topic of interest, the use of metrics for evaluation of knowledge dissemination, flexibility in how to deliver information, the much faster exchange of information than face-to-face methods, and the ability to provide social support, which is of particular pertinence to this population of women with PVD, who are characterized as suffering in silence. Moreover, Canadians find it acceptable to receive health information via social media (Royal College of Physicians and Surgeons of Canada, 2014), and there is evidence that Twitter is widely used and acceptable as a vehicle for increasing knowledge and exchanging advice (Antheunis, Tates, & Nieboer, 2013). In terms of KT process, we elected to follow an end-of-grant KT framework given that the findings from Brotto et al. (2019) were the basis of our social media messages. We set up a Twitter account, a public Facebook Page, a private Facebook Group, and an Instagram account, which were used to disseminate the video, key messages, and other related content during the campaign period. The accounts were all branded with the same colors and images from the infographic video to solidify the #ItsNotInYourHead brand. Table 1 lists other strategies we used to increase engagement with our social channels and the content we promoted. Creation of the video took 6 months, followed by 2 months of meetings with the entire team prior to campaign launch. The campaign ran for 6 months, from October 2017 to March 2018. Following this, 2 months were spent accumulating metrics from all sources, and a plain-language report was developed a month later.

Infographic Video

The main dissemination product was a 143-s infographic video we created with a patient advisory group, our knowledge user partner, and the company The Thinking Box. The video (#ItsNotInYourHead) depicts a woman suffering in silence with chronic genital pain until she is diagnosed with PVD and suddenly realizes that she is not alone. The video then summarizes the results of our study (Brotto et al., 2019) and depicts effective pain management techniques using both mindfulness-based and cognitive behavioral therapy based skills, highlighting that both of these treatments are effective for PVD. The final frame of the video lists useful resources where women might want to learn more about PVD. As patient partners were fully engaged in the development of the video, they also named the campaign, #ItsNotInYourHead, emphasizing that the essence of the experience that women often face on their journey with chronic genital pain is frustration, sadness, and helplessness.

Table 1 Strategies used throughout the #ItsNotInYourHead campaign
1. Created original content to promote the campaign messages using the script, GIF clips, and stills from the #ItsNotInYourHead video
2. Shared online media which featured Professor Lori Brotto discussing PVD and mindfulness to promote the science supporting the campaign messages
3. Consulted a patient partner with lived experience of the condition on the campaign team who helped promote content and gave a credible voice to the campaign
4. Published 2-3 original tweets per week, 1 original Facebook post, and 1 Instagram post per week using the content management platform Hootsuite. We used images or graphics where possible to grab visual attention and boost post performance, and used Hootsuite to monitor our hashtag, keywords, and several key accounts so we could join in and amplify online dialogues related to our campaign messages
5. Tapped into existing online communities that dealt with chronic pain, women's health issues, reproductive health issues, and positive sex, and leveraged the support of women's health influencers and relevant organizations with an established following of our target audiences
6. Hosted chats on our Twitter account with various groups to demystify some of the common myths around PVD and shared evidence-based information regarding treatment of PVD
7. Wrote blog posts promoting the campaign and trial findings for various outlets we knew had a following of our target audiences
8. Aligned promotion with trending and viral hashtags, awareness days, or 'take action weeks' (e.g., #FactFriday, #MindfulnessMondays, World Compassion Day, Sexual Health Week, International Women's Day, and National Pain Week)
9. Developed an easily downloadable and user-friendly social media toolkit which included template posts, graphics, and guidelines on how and when to use them on social media platforms
10. Retrieved weekly social metrics to analyze what content was performing well so we could strategically target future posts (for example, specific content that received high engagement, and days and times of day with the most engagements)

In the creation of our infographic video, we were mindful of speaking to the diversity of individuals who may experience PVD, and this was reflected in the illustrations.

Social Accounts

Our primary outcome in this project was reach indicators, which can be defined as how many users are served campaign messaging on a given social platform or channel, including the accessibility of our video via social media based on our dissemination efforts. We used the following social accounts for dissemination: (1) Web: A webpage dedicated to the campaign was housed at www.whri.org. The page described how to use the campaign social media toolkit for dissemination and provided links to the video as well as our other social channels. (2) YouTube: The video was hosted on the Women's Health Research Institute YouTube channel. (3) Instagram: We created the account @PVD_Advocacy to share our campaign. (4) Facebook: We created a public-facing page @PVDadvocacy to promote our video and share information. We also created a private Facebook Group where women with PVD built a community of support. (5) Twitter: The handle @PVD_Advocacy was created to connect with women who experience symptoms of PVD. In addition to targeted dissemination via the #ItsNotInYourHead social media channels, we collaborated with the award-winning digital marketing agency Ehm&Co to further boost the profile of the campaign and its key messages to the Yummy Mummy Club (YMC) community. Ehm&Co is the company behind yummymummyclub.ca. The partnership capitalized on the community's monthly reach of over 5 million people.
They created an integrated program for the #ItsNotInYourHead campaign from January 2018 to March 2018 which included: (1) a custom article shared through the YMC monthly newsletter and social media channels; (2) promotional posts about PVD and the campaign on YMC social channels; (3) a Twitter party (a sponsored live chat using the Twitter platform and hashtag (#) search feature to connect participants to an ultra-fast-paced conversation stream on a specific topic); (4) a Facebook Live event; and (5) a Social Influencer Program. Individuals attending the Twitter party were incentivized to join through an opportunity to win a monetary prize.

Impact and Evaluation

We tracked the success of, and metrics for, our posts across all platforms, and found that our campaign generated an international reach. Reach was defined as the number of users reached; impressions were defined as the number of times a user is served a post. (In other words, reach could be 12 unique users, whereas impressions could be 24 if those 12 users each saw the same post twice.) We focused also on engagement, defined as the total number of times a user interacted with a post, including replies, follows, likes, links, cards, hashtags, embedded media, username, profile photograph, or post expansion. We also measured impressions as an index of the number of people who may have seen our content, regardless of whether it was clicked. All data presented are based on the campaign period of 6 months.

Budget

Funding for this project came from a Knowledge Translation REACH award from the Michael Smith Foundation for Health Research ($9,000). Additionally, the services of Ehm&Co ($28,000) were covered by an Operating Grant from the Canadian Institutes of Health Research to Brotto.

Results

Over six months, our campaign reached a total of 45 countries (Fig. 2). Our webpage had a total of 180 unique page views, and our infographic video was viewed, on average, for 119 s (83% of the video). Direct views on YouTube numbered 785, with an average duration of 87 s (61% of the video), across 30 countries. All views of the video were in English, according to YouTube's built-in analytics dashboard. Moreover, 11.4% of the total views added English subtitles. On Instagram we had 1,077 Impressions, 253 Likes, and gained 40 followers. On Facebook, we had 53 followers, and the highest reach on a single post was 198. On Twitter, we had 108,029 Impressions, 2,307 Engagements, 402 retweets, 414 Likes, and 1,047 media views. Our campaign media partner, Ehm&Co, created a custom article about PVD and linked it to our campaign (Fig. 3); it had 1,942 page views, 1,161 Engagements, and an average viewing time of 204 s. Their social media posts (n = 29) generated 368,115 Impressions and led to 185 Engagements. They organized a Twitter party consisting of a 1-h online chat with Brotto which generated 3,400 tweets, 19,049,942 Impressions, 4,873 Engagements, and included 101 participants. It also ranked among the top five trending topics in Canada (Fig. 4). Yummy Mummy Club also hosted a Facebook Live event during which Brotto answered questions live on video. This had a total of 30,900 views and 66 Engagements. Ehm&Co's "social influencer program" entailed engaging five influencers in the Yummy Mummy Club network who used key campaign messaging and imagery to share information about #ItsNotInYourHead and PVD with their audiences. A total of 30 posts were made, leading to 1.5 million social Impressions and 3,184 Engagements.
In total, our partnership with Yummy Mummy Club led to 20.9 million Impressions. Following completion of the campaign, #ItsNotInYourHead was named a finalist in the Canadian Online Publishing Awards in the category of best online campaign.

Discussion

The goal of this project was to carry out a knowledge translation campaign designed to share information and raise awareness about Provoked Vestibulodynia with our primary target audience of women. By bringing together patients, clinicians, researchers, social media experts, digital design experts, and influencers, our campaign was designed to address a significant knowledge-to-action gap that has been well described by women with PVD (Shallcross et al., 2018, 2019). The team was guided by the knowledge-to-action (KTA) framework (Straus et al., 2009) and used the Knowledge Translation Planning Template (National Collaborating Centre for Methods and Tools, 2012) to develop our KT strategy. Throughout the 6-month campaign, our team's communications assistant tracked metrics and reported these outcomes to the larger team at biweekly intervals. Overall, we found that, using all social outlets and all partners, our campaign reached 45 countries and led to over 21 million Impressions. As one measure of impact, we can conclude that our campaign reached its goal of sharing information about PVD. Moreover, the development of our infographic video, which was patient-partner led, was a key aspect of our KT plan, as it combined evidence-based facts about PVD with the findings from a recently published clinical trial of psychological treatment for PVD (Brotto et al., 2019). We observed that viewers watched most of the infographic video (up to 83%), suggesting that this vehicle may be an effective way of translating scientific findings about PVD into an accessible format for women. In the 2 months following the end of the campaign, the team met to brainstorm, discuss, and then narrow down the factors that contributed to the success of our campaign; this exercise suggested that three ingredients were key: having a dedicated campaign team, having a patient partner, and our media partnership with Ehm&Co. The campaign team consisted of researchers and clinicians with expertise in PVD, knowledge translation experts, a communications assistant who had expertise in social media analysis and metrics, a digital health research manager, and women with lived experience of PVD, all of whom were passionate about the #ItsNotInYourHead cause and message. Our patient partner was also a member of the investigator team from the project's inception, and this was seen as critical to the campaign's success. Having the unique experience of living with PVD, receiving treatment, and struggling to obtain evidence-based information about PVD through various means before diagnosis meant we had the lens of our main target audience guiding us throughout the campaign, and ensured that the language we used throughout all of our posts was aligned with women's experiences. Incorporating the patient voice throughout campaign activities added value in engaging women and in disseminating the information in an accessible and relatable way.
Other researchers investigating the experiences of women with PVD also advocate for patient engagement throughout the research development process (Shallcross et al., 2019), and we would advocate that this practice become standard among research studies designed to capture and reflect the lived experiences of women, particularly those with PVD. This has been described as a "paradigm shift" in health research, where it has been concluded that "evidence-based medicine" is simply not possible without patient engagement (Sacristán, 2013). Partnering with a digital marketing agency meant the campaign was amplified much more rapidly and received extensive online exposure to a variety of audiences that may not have been reached through our own efforts. Ehm&Co's editorial teams used their expertise in storytelling to translate the scientific findings in a way that resonated with a broad audience. This partnership also meant we added new online marketing tactics to our digital marketing toolbox. Reflections shared by Ehm&Co suggested that this campaign, and this subject area, resonated with their community in a very special way, and they reported that their community wanted to learn more about PVD. Overall, we observed that among the various social media strategies used, we generated the most reach and impressions with our Facebook Live event and particularly our Twitter party, which trended on Twitter Canada (Fig. 4). Given that many women with PVD report suffering in silence, experiencing difficulty in obtaining evidence-based information about PVD, and being dissatisfied with their interactions with healthcare (Sadownik, 2014; Shallcross et al., 2018, 2019), the use of social media to share accurate information about PVD was seen to fill this gap. Many of our viewers expressed that this was the first time they had received evidence-based information about PVD (Fig. 5). Unfortunately, we did not assess women's retention of this information, or whether it led to any behavior change such as seeking a new healthcare provider or making suggestions to their own healthcare providers about the availability of certain treatments that were highlighted during our campaign. Knowledge translation emerged as a potential solution to bridge the known 17-year gap between science and practice (Morris et al., 2011). Since viewers watched most of our infographic video, healthcare providers may be able to use this video as a means of providing women seeking their care with basic, standard information about PVD and about the efficacy of psychological treatments. As a cost-effective means of sharing information about PVD, academic health center-approved social media accounts might be used to share evidence-based information from credible sources with women waiting to see healthcare providers. Furthermore, a professionally moderated social media account might be considered as a way of disseminating knowledge about PVD based on the empirical literature, and our findings suggest that the impact of its reach could be significant. Infographic videos might also be a useful addition to the set of educational materials held by primary care doctors, since they relay up-to-date, evidence-based educational information in a standard way to all patients. Our infographic video focused mostly on defining the symptoms of PVD and illustrating the process of obtaining a diagnosis. It then shifted to focus on the main outcomes of a large clinical trial.
Women with PVD perceive their primary care doctors to lack basic information about PVD (Shallcross et al., 2019) and report this to be a barrier in their journey to wellness. Given that junior doctors, in particular, have been found to lack awareness and understanding of PVD due to lack of training (Toeima & Nieto, 2011), sharing the infographic video with their patients might offset some of this information gap. Future KT efforts might also specifically target (junior) primary care doctors, who are likely the point of entry into treatment for women with PVD. Rurality is another factor that can directly impede access to health care (Humphreys, 2002), and there is evidence that women who live in rural and remote areas may not be receiving appropriate diagnostic and treatment information for PVD (Cox & Neville, 2012). Much is known about the predictors of sustainable e-health technologies in rural settings that help to bridge access-to-care issues (Hage, Roo, van Offenbeek, & Boonstra, 2013), and social media campaigns such as #ItsNotInYourHead may be particularly useful for women with PVD living in rural and remote communities. Future studies should focus on the usefulness and reach of similar educational campaigns, specifically for women living in rural areas. One limitation of our campaign is that although we could track geographic reach, we could not identify pertinent personal characteristics of those who engaged. For example, we do not know whether we reached our target audience of women, or how many of them had a diagnosis of PVD. We also could not measure whether viewers understood the information in our infographic video or whether the information shared was retained. Moreover, our campaign focused on raising awareness, but the potential impact on behavior remains unknown, particularly whether the information led women who experience symptoms of PVD to receive a diagnosis more swiftly, or whether those with PVD were able to obtain evidence-based treatments more quickly. Behavior change theory posits that changing behavior is a complex process, whereby actual change in behavior may occur much later than informational and motivational changes (Prochaska & DiClemente, 1986). It remains a challenge for future social media campaigns to explore methods of extracting information about viewers in order to assess whether strategies to reach the target audience were successful. Moreover, future projects need to incorporate methods of measuring behavior change after the target audience has received information. Finally, budget may be a barrier to other knowledge translation campaigns, particularly if influencers must be compensated. Guided by the Knowledge Translation Planning Template (National Collaborating Centre for Methods and Tools, 2012), we selected reach indicators as our metric for evaluating our KT goals. Other indicators of impact that might be used in a future KT campaign associated with sharing knowledge about PVD include use indicators, practice change indicators, knowledge change, and attitude change. For example, a measure of use might be the number of PVD healthcare providers who have used the knowledge to make changes in their practice, including adding new educational information about PVD. A measure of practice change could be the number of units or clinics which intend to make changes as a result of the information learned.
Knowledge change can be measured quantitatively and qualitatively and can be assessed among women, healthcare providers, and the general public, for example. Attitude change might be captured by the number of women who no longer experience dismissive statements by healthcare providers. Overall, we determined that this end-of-grant social media campaign designed to share information and raise awareness about PVD was successful. In addition to sharing general information about the diagnosis, it was used as an opportunity to share the findings from recent publications on psychological treatments for PVD (Brotto et al., 2019, 2020). Given the limited body of knowledge translation science in the field of women's sexual health, this is a novel contribution to this body of literature, and we encourage the field to adopt this strategy for knowledge dissemination.
Free-Standing Two-Dimensional Single-Crystalline InSb Nanosheets

Growth of high-quality single-crystalline InSb layers remains challenging in material science. Such layered InSb materials are highly desired for the search for and manipulation of Majorana fermions in the solid state, a fundamental research task in physics today, and for the development of novel high-speed nanoelectronic and infrared optoelectronic devices. Here we report on a new route towards the growth of single-crystalline, layered InSb materials. We demonstrate the successful growth of free-standing, two-dimensional InSb nanosheets on one-dimensional InAs nanowires by molecular-beam epitaxy. The grown InSb nanosheets are pure zinc-blende single crystals. The length and width of the InSb nanosheets are up to several micrometers and the thickness is down to ~10 nm. The InSb nanosheets show a clear ambipolar behavior and a high electron mobility. Our work will open up new technology routes towards the development of InSb-based devices for applications in nanoelectronics, optoelectronics and quantum electronics, and for the study of fundamental physical phenomena.

Over the past several decades, the inherent scaling limitations of Si electron devices have fuelled the exploration of alternative semiconductors with high carrier mobility to further enhance device performance [1-3]. In particular, high-mobility III-V compound semiconductors have been actively studied [4,5]. As a technologically important III-V semiconductor, InSb is the most desired material system for applications in high-speed, low-power electronics and infrared optoelectronics, owing to its highest electron mobility and narrowest bandgap among all the III-V semiconductors. Recently, epitaxially grown InSb nanostructures have been widely anticipated to have potential applications in spintronics, topological quantum computing, and the detection and manipulation of Majorana fermions, due to the small effective mass, strong spin-orbit interaction and giant g factor in InSb [6-18]. All these applications require a high degree of control over the morphology and, especially, the crystal quality of the grown InSb [19,20]. Unfortunately, because InSb has the largest lattice parameter among all the III-V semiconductors, epitaxial growth of InSb layers faces the inevitable difficulty of finding a lattice-matched substrate. Conventionally, buffer layers with a graded or abrupt composition profile are deposited on lattice-mismatched substrates to obtain a layer with the required value of the lattice constant [21,22]. Nevertheless, even when sophisticated buffer-layer engineering is used, the density of dislocations threading to the surface of the buffer from its interface with a lattice-mismatched substrate is often too high to grow InSb layers of the crystal quality needed for the fabrication of high-performance nanoelectronic and quantum devices and for the study of novel physical phenomena. Here, we report on the successful growth of novel free-standing, high-quality two-dimensional (2D) InSb nanosheets by molecular-beam epitaxy (MBE). A new route to growing layered InSb structures of high material quality is demonstrated, in which free-standing InSb nanosheets are epitaxially grown on InAs nanowire stems; the process is thus independent of buffer-layer engineering. The morphology and size of the free-standing InSb nanosheets can be controlled in this approach by tailoring the Sb/In beam equivalent pressure (BEP) ratio and the InSb growth time.
We demonstrate the growth of free-standing InSb nanosheets with length and width up to several micrometers and thickness down to ~10 nm. High-resolution transmission electron microscope (TEM) images show that the grown InSb nanosheets are pure zinc-blende (ZB) single crystals and have excellent epitaxial relationships with the InAs nanowire stems. The formation of the InSb nanosheets is attributed to a combination of vapor-liquid-solid (VLS) and anisotropic lateral growth. Electrical measurements show that the grown InSb nanosheets exhibit an ambipolar behavior and a high electron mobility. These novel, high-material-quality, free-standing InSb single-crystalline nanosheets have great potential not only for applications in high-speed electronics and infrared optoelectronics, but also for the realization of novel quantum devices for studies of fundamental physics.

The 2D InSb nanosheets were grown by MBE on free-standing InAs nanowire stems which were first grown on Si (111) substrates using Ag as catalyst in the MBE chamber 23. Figure 1a shows the schematics of the growth process. We found that the morphology of InSb strongly depends on the Sb/In BEP ratio, and the InSb nanosheets can be realized by tailoring the Sb/In BEP ratio (Supplementary Section S1). For the samples grown with a low Sb/In BEP ratio of 1-20, InSb and InAs formed core-shell or axial heterostructure nanowires (Supplementary Figs. S1 and S2). Upon further increasing the Sb/In BEP ratio, the resulting InSb nanowires have diameters obviously larger than that of the InAs segment (detailed TEM investigation of a dozen such nanowires reveals a diameter increase from 130% to 589%; see Supplementary Fig. S3 and Table S1). By increasing the Sb/In BEP ratio to the range of 27-80, new geometrically structured materials, each consisting of a 2D InSb nanosheet and a 1D InAs nanowire stem, were obtained (Supplementary Figs. S1 and S4). Figures 1b to 1g show the top-view magnified scanning electron microscope (SEM) images of InSb nanosheets grown with an Sb/In BEP ratio of 80. As can be seen, the grown InSb nanosheets have parallelogram shapes. The thicknesses of the InSb nanosheets show significant variation (Supplementary Fig. S5), as roughly measured from SEM images, from ~67 nm (Fig. 1b) down to about 10 nm (Fig. 1g).

To examine the structural characteristics, crystalline quality and chemical composition of the grown InSb nanosheets, TEM and energy-dispersive x-ray spectroscopy (EDS) measurements were performed. Figure 2a is a bright-field TEM image of a typical InSb nanosheet grown on an InAs nanowire at an Sb/In BEP ratio of 27. The InAs stem is 21 nm in diameter and 725 nm in length, while the InSb segment has a parallelogram shape with side lengths of 380 nm and 508 nm. It is found that the 2D InSb nanosheets can be successfully grown not only on 1D wurtzite (WZ) crystalline InAs nanowires (Supplementary Fig. S6) but also on 1D InAs nanowires with the ZB phase, as shown in Fig. 2b. High-resolution TEM images of the side sections (Figs. 2c, 2f and 2h), the corner sections (Figs. 2d and 2g) and the section near the tip (Fig. 2e) of the InSb nanosheet and the associated Fourier transform (Fig. 2i) illustrate that the InSb nanosheet has a perfect ZB crystal structure, free from stacking faults or WZ regions.
Although stacking faults have been observed in Ag-catalyzed and self-seeded InSb nanowires by other groups 24,25, detailed TEM observations of our grown InSb nanosheets with different shapes and sizes all reveal that the InSb nanosheets are fully single-crystalline, completely free from stacking faults and twinning defects (Supplementary Figs. S7 and S8). As observed in InSb nanocrystals grown with a high Sb/In BEP ratio 26, side facets with low surface energy such as {111} and {011} can be clearly seen in our InSb nanosheets (Fig. 2d, Supplementary Fig. S7). High-angle annular dark-field scanning TEM (HAADF-STEM) and the corresponding EDS line profiles (Supplementary Fig. S9) show that the InAs/InSb heterostructures start as InAs and then change to InSb. EDS elemental mappings shown in Figs. 2j to 2m confirm that sharp interfaces formed between the InAs nanowires and the InSb nanosheets, as reported for InAs/InSb heterostructure nanowires 27-30. The remaining spherical catalyst particle on the top of the InSb nanosheets is found to be composed of Ag, In and Sb (Fig. 2n). EDS point analysis indicates that the InSb nanosheet contains In and Sb with an atomic ratio of ~1:1 and that the contents of Ag, In and Sb in the seed particles after the growth of the InSb nanosheets and of the InSb nanowires are similar (Supplementary Table S2).

For the InSb nanosheets grown for a shorter time (Figs. 3a to 3d; see also Supplementary Fig. S1), no apparent difference in morphology is observed, except for their smaller sizes. Upon increasing the InSb growth time to 120 min and to 160 min (Figs. 3e to 3l) while keeping the other growth parameters unchanged, the obtained InSb nanosheets retain their planar shapes; these InSb nanosheets can be grown to micrometers in length and width while remaining thin.

As to the growth mechanism of the InSb nanosheets, it is unlikely that the growth is governed solely by the traditional VLS 31,32 or vapor-solid (VS) process 33. Instead, we consider that the growth of the InSb nanosheets is dominated by a combination of VLS (vertical growth) and VS (lateral growth) processes. The spherical Ag-In-Sb alloy particles on top of the InSb nanowires and InSb nanosheets have the same crystal structure and similar compositions (Supplementary Fig. S10 and Table S2), indicating that the VLS mechanism of Ag-seeded InSb nanowire growth 24 also operates in the growth of the InSb nanosheets. Meanwhile, we find that the lateral growth often observed in the growth of antimonide nanowires 27-30 is incorporated into our InSb nanosheet growth, since the width of the nanosheets is growth-time dependent (Figs. 3a to 3l). However, the lateral growth of the nanosheets is quite different from that of nanowires. The InSb nanowires share a rotationally symmetric, laterally overgrown shell 34, while the InSb nanosheets show an anisotropic lateral crystal growth. One possible combined VLS and anisotropic lateral growth process of the InSb nanosheets is given in Supplementary Section S6. Although the exact reason for the anisotropic lateral growth of the InSb nanosheets remains to be determined, we believe that the mechanism combining VLS and anisotropic lateral growth could be used to fabricate other high-quality 2D antimonide nanostructures.

The electronic properties of the grown InSb nanosheets were characterized by electrical measurements.
Figure 4a is a tilted-view sketch of the device structure used in the characterization (lower panel) and a corresponding atomic force microscopy image of a typical fabricated device (upper panel). In the device, an InSb nanosheet is contacted by Ti/Au electrodes in a Hall-bar configuration, and the carrier density in the nanosheet is modulated by a global back gate. Details of the device fabrication and measurement scheme can be found in Methods and Supplementary Section S7.

First, by voltage-biasing the source (S) and drain (D) contacts as shown in Fig. 4a, the 2-probe conductance G, which takes the form G = I_ds/V_ds, was measured as a function of gate voltage V_gs at different temperatures (Fig. 4b, main panel, Device 1). Upon lowering the temperature, several characteristic features were observed. (1) The off-state G shows a monotonic decrease for V_gs in the region of -3 V to 0 V, while the on-state G shows a monotonic increase for V_gs in the region of 8 V to 10 V. (2) The G-V_gs curve becomes steeper in the linear region, indicating an increase of the peak transconductance g_m, where g_m = max{dG/dV_gs}. (3) A clear ambipolar transport characteristic is seen at T = 60 mK, with the conductance on the hole side 1-2 orders of magnitude smaller than that on the electron side (Fig. 4b, inset). At this temperature, G can be gate-tuned from 6 G_0 in the on state down to 0.008 G_0 in the off state (G_0 = 2e^2/h, where e is the elementary charge and h is the Planck constant). Moreover, the G-V_gs curves are well reproduced for the upward (blue) and downward (red) back-gate-voltage sweep directions, indicating a good surface quality of our InSb nanosheets, free from major influence of interfacial charge traps in spite of their large surface area. Figures 4c and 4d show the I_ds-V_ds plots at various gate voltages in an n- and a p-type region, respectively. The linear I_ds-V_ds curves obtained in the n-type region indicate an ohmic behavior of the electron injection and the absence of a Schottky contact barrier. The nonlinear I_ds-V_ds curves observed in the p-type region indicate, on the contrary, the presence of an injection barrier for holes.

To obtain a reliable field-effect mobility, especially at low temperatures, one needs to take the contact resistance into account. Here we adopt a pinch-off trace fitting method 19,29,35. Yet, a major drawback of this approach is the uncertainty in obtaining the gate-to-nanosheet capacitance C_g; although C_g could be estimated from the geometric size of the device, the Hall-bar device structure was employed here to extract C_g experimentally. Figure 5 summarizes the low-field Hall measurement data obtained from a second device (Device 2). The Hall resistance, R_xy, is gate-tunable and approaches ~2 kΩ at B = 1 T in the low-V_gs region (Fig. 5a, main panel). A linear fit to the R_xy-B curve yields the Hall coefficient (Fig. 5a, inset). The carrier density n is then extracted from the gate-dependent Hall coefficient and is found to increase linearly from ~2×10^11 cm^-2 to ~1.6×10^12 cm^-2 with increasing V_gs (Fig. 5b, main panel). A gate-to-nanosheet capacitance C_g = 832 aF is then determined from the fitted slope of the n-V_gs curve, close to the value of ~800 aF estimated from the geometry of the device. This measured value is used to fit the 2-probe G-V_gs curve and evaluate the field-effect mobility of Device 2 (Fig. 5b, inset). The analysis yields an electron mobility of ~15,000 cm^2 V^-1 s^-1 for this device.
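As a concrete illustration of this analysis chain, the following Python sketch reproduces the steps described above on synthetic numbers of the same order of magnitude as those quoted; the field range, densities and gated area are illustrative assumptions, not the measured data of Device 2.

import numpy as np

# Illustrative reconstruction of the Hall analysis chain described above.
e = 1.602e-19                                  # elementary charge (C)

B = np.linspace(-1.0, 1.0, 201)                # magnetic field (T)
R_xy = 2.0e3 * B                               # Hall resistance (ohm), linear in B
R_H = np.polyfit(B, R_xy, 1)[0]                # Hall coefficient (ohm/T)
n = 1.0 / (e * R_H)                            # 2D sheet density (m^-2)
print(f"n = {n * 1e-4:.2e} cm^-2")             # ~3e11 cm^-2 here

# Repeating this at several gate voltages gives n(V_gs); the total
# gate-to-nanosheet capacitance follows from the fitted slope:
#     C_g = e * A * dn/dV_gs, with A the gated nanosheet area.
V_gs = np.array([2.0, 4.0, 6.0, 8.0])            # back-gate voltages (V)
n_gate = np.array([2e15, 6e15, 1.1e16, 1.6e16])  # sheet densities (m^-2)
dn_dV = np.polyfit(V_gs, n_gate, 1)[0]           # m^-2 V^-1
A = 2.5e-12                                      # assumed gated area (m^2)
C_g = e * A * dn_dV
print(f"C_g = {C_g * 1e18:.0f} aF")              # a few hundred aF

The key point of the Hall-bar geometry is that C_g is obtained from measured densities rather than from the device geometry, removing the dominant uncertainty in the subsequent mobility fit.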
An electron mobility of ~18,500 cm^2 V^-1 s^-1 is obtained for Device 1 by the same kind of analysis (Supplementary Section S8).

In conclusion, we demonstrate a new growth route for high-quality 2D InSb layers by MBE. These InSb layers are free-standing 2D InSb nanosheets grown on 1D InAs nanowires, and the process is independent of conventional buffer-layer engineering. The morphology and size of the InSb nanosheets can be controlled by tailoring the Sb/In BEP ratio and growth time. The length and width of the grown InSb nanosheets can be up to several micrometers and the thickness can be down to ~10 nm. The InSb nanosheets are pure ZB single crystals. The electrical measurements show that these InSb nanosheets exhibit a high electron mobility and an ambipolar behavior. Our work opens a conceptually new approach to obtaining high-quality 2D narrow-bandgap semiconductor nanostructures, and will speed up the applications of InSb nanostructures in nanoelectronics, optoelectronics and quantum electronics and the development of topological quantum computation technologies.

Methods
The InAs/InSb nanostructures were grown in a solid-source molecular-beam epitaxy (MBE, VG80) system. Commercial p-type Si (111) wafers were used as the substrates. Before loading the Si substrates into the MBE chamber, they were immersed in a diluted HF (2%) solution for 1 min to remove the surface contamination and native oxide. After cleaning, a Ag layer of 2 Å nominal thickness was deposited on the substrate in the MBE growth chamber at room temperature and then annealed in situ at 650 °C for 20 min to generate Ag nanoparticles. InAs nanowires were grown for 20 min at a temperature of 505 °C with an As/In beam equivalent pressure (BEP) ratio of 30. Then the group-V source was abruptly switched from As to Sb without any variation of the substrate temperature. All the InSb segments (unless otherwise specified) were grown for 80 min at different Sb/In BEP ratios by increasing the Sb flux while keeping the In flux constant.

The morphologies of the samples were observed with a scanning electron microscope (Nova NanoSEM 650), operated at 20 kV. Structural characterization was performed using a transmission electron microscope (TEM); samples were removed from the growth substrate via sonication in ethanol and then drop-cast onto a holey carbon film supported by a copper grid. High-resolution TEM images and energy-dispersive x-ray spectrometer (EDS) spectra (including the EDS elemental mapping and line scans shown in Fig. 2) were taken with a JEM-ARM200F, operated at 200 kV. The other high-resolution TEM images shown in Figs. S2, S3, S6, S8 and S10 were collected using a JEOL-2011, operated at 200 kV. The selected area electron diffraction patterns in Fig. S7 were acquired using a TECNAI G2 20, operated at 200 kV. The chemical composition shown in Fig. S9 was evaluated by scanning TEM, using an FEI Titan, operated at 300 kV in both the line-scan and point-analysis modes. Atomic force microscopy measurements were carried out with a Digital Instruments Nanoscope IIIA using silicon tips with a typical resonance frequency of 300 kHz.

For device fabrication, the as-grown InSb nanosheets were first mechanically transferred onto a degenerately n-doped Si substrate covered by a 105-nm SiO2 layer, which serves as a global back gate. Then selected nanosheets were optically positioned relative to predefined alignment marks. Finally, the contact electrodes were patterned onto the sample using a standard electron-beam lithography process.
Prior to electron-beam evaporation of a 10/100 nm Ti/Au metal film, the sample was chemically etched for 2 min in a DI-water-diluted (NH4)2Sx solution to remove the surface oxide layer at the contact areas. All electrical measurements in this work were performed in a 3He/4He dilution fridge with a base temperature of 8 mK (Oxford Triton 200). In the 2-probe measurements, only the source/drain (S/D) contacts were dc voltage-biased, and the other four contacts were left floating. In the Hall-bar measurements, the S/D contacts were used to inject a constant dc current bias of ~50 nA through the sample, and the Hall voltage V_xy was amplified and recorded simultaneously with the sweep of the magnetic field applied perpendicularly to the plane of the sample/substrate. In the 2-probe G-V_gs data, a circuit resistance of ~21.56 kΩ (including RC filters and serial resistors) has been subtracted.

Supplementary Materials: Figs. S1 to S15; Tables S1 to S3; References.
Competing financial interests: The authors declare that they have no competing financial interests.

Fig. 5 caption: a, Low-field Hall measurement data of Device 2. The right axis shows the transverse resistance R_xy of the form R_xy = V_H/I. Note that larger Hall resistance values are found for lower gate voltages (lower carrier densities). The inset shows a linear fit (red) to the measured V_H-B trace (blue), from which the Hall coefficient R_H can be determined. b, The Hall coefficient R_H obtained as a function of V_gs (left axis, circles) at T = 60 mK. From R_H, we extract the sheet carrier density n as a function of V_gs (right axis, squares). A linear fit to the n-V_gs data yields dn/dV_gs = 2.1×10^11 cm^-2 V^-1 and therefore a gate-to-nanosheet capacitance C_g = 832 aF. Using this C_g value, we can fit the measured G-V_gs trace of the device (inset of Fig. 5b) and extract a field-effect mobility of ~15,000 cm^2 V^-1 s^-1.

S1 Morphology control of the InSb nanostructures
We find that the morphology of the InSb nanostructures strongly depends on the Sb/In beam equivalent pressure (BEP) ratio, and the InSb nanosheets can be realized by tailoring the Sb/In BEP ratio. The scanning electron microscope (SEM) images in Fig. S1 show the 20°-tilted view of the InAs/InSb heterostructures grown at different Sb/In BEP ratios. For the sample with an Sb/In BEP ratio of 1 (Fig. S1(a)), the nanowires exhibit tapered morphologies with short lengths, since the axial growth rate of InSb is limited by the low Sb flux, and an InSb shell forms around the top of the InAs nanowire stem instead of an axial heterostructure, as reported for other antimonide nanowires 1. It is noteworthy that this low Sb/In BEP ratio is very close to the typical Sb/In BEP ratio for the molecular-beam epitaxy (MBE) growth of planar InSb epitaxial layers 2. Therefore, a homogeneous InSb layer can be observed on the substrate surface. At a slightly higher Sb/In BEP ratio of 20, InAs/InSb axial heterostructure nanowires are obtained, and a typical nanowire is shown in Fig. S1(b). As can be seen, both the InAs and the InSb sections appear fully un-tapered. The diameter of the InSb nanowire is small (~28 nm, Fig. S2), indicating the absence of obvious lateral growth. However, upon further increasing the Sb/In BEP ratio up to 27, it is clear that the resulting InSb nanowires have diameters obviously larger than that of the InAs segment (Fig. S1(c)).
Detailed transmission electron microscope (TEM) observation of a dozen such nanowires revealed a diameter increase from 130% to 589% (Table S1 and Fig. S3), which is much larger than the diameter variation of the InAs/InSb heterostructure nanowires grown by metal-organic vapor phase epitaxy (MOVPE) and chemical beam epitaxy (CBE) 3,4. Interestingly, at this Sb/In BEP ratio we observed a new geometrical structure that consists of two-dimensional (2D) InSb nanosheets and one-dimensional (1D) InAs nanowires. InSb nanosheets in the shapes of parallelograms, pentagons and hexagons are observed, and their dimensions will be presented later. For even larger Sb/In BEP ratios (50-80, Figs. S1(d) and S1(e)), the primary morphologies of the InAs/InSb heterostructures are nanowire-nanosheet. It is also worth noting that the surface of the Si substrate is covered by many irregular InSb islands when the Sb/In BEP ratio is larger than 1. Moreover, the density of the parasitic InSb islands on the substrate surface increases with increasing Sb/In BEP ratio. When the Sb/In BEP ratio is above 120 (Fig. S1(f)), no InSb nanowires or nanosheets were observed, since the InAs nanowires were covered by the parasitic InSb islands.

A high-resolution TEM image of such a thin InSb nanowire segment (Fig. S2(b)) indicates a diameter of 28 nm. Although very thin InSb nanowires down to 5 nm have been reported 5, to our knowledge this is the first observation of an ultrathin InSb nanowire with a diameter down to 30 nm in a heterostructure with an InAs nanowire. Although the diameter of the InSb is larger than that of the InAs segment (about 18 nm in diameter), the radial growth of these InSb nanowires is not as obvious as in the heterostructures grown with higher Sb/In BEP ratios (shown later), and high aspect ratios of up to 89 (2.5 μm in length) can be obtained. It is worth noting that the lateral growth on the InSb nanowire along the ⟨211⟩- and ⟨011⟩-type directions can be clearly observed.

Table S1 lists the diameters of the InAs, InSb and seed particles for six representative heterostructure nanowires grown with an Sb/In BEP ratio of 27. In contrast to the small diameter variation of InAs/InSb heterostructure nanowires grown by MOVPE and CBE 3,4, here a diameter increase from 130% to 589% occurs. Based on previous works on InAs/InSb and GaAs/GaSb heterostructure nanowires, the following factors have been considered in evaluating the mechanism of the diameter increase. It is believed that the change in particle volume due to the uptake of group-III atoms is the main reason for the diameter increase 3,4,6-8. A change of the particle wetting angle/aspect ratio could also explain part of the diameter increase, but this is believed to be only a minor effect 7. Additionally, the effect of the rotation of the nanowire sidewalls has also been considered 4. We argue, however, that in our system the above factors cannot be critical for the diameter expansion, since the particle volume is relatively small, as shown in Table S1. We believe that the very large InSb segment diameter is mostly caused by non-seeded lateral growth on the sidewalls of the nanowires.

Table S1. List of parameters for six InAs/InSb axial heterostructure nanowires (grown with an Sb/In BEP ratio of 27), including the diameters of the InAs and InSb sections and the catalyst particle, the percentage increases of the InSb and catalyst-particle diameters, and the lengths of the InAs and InSb sections.
S3.1 TEM images of an InSb nanosheet grown on a WZ InAs nanowire
We find that the epitaxial growth of InSb nanosheets on WZ InAs nanowires can also be realized. Figure S6 shows TEM images of an InSb nanosheet grown on a WZ InAs nanowire. Detailed high-resolution TEM images (Figs. S6(b) to S6(f)) indicate that the InSb nanosheet has a ZB crystal structure, and no stacking faults or twinning defects are found in the corner or other sections of the nanosheet. The InAs nanowire has a pure WZ crystal structure with a diameter of about 15 nm (Fig. S6(g)). Clearly, an atomically sharp structural interface (from WZ to ZB) can be observed at the InAs/InSb interface section (Fig. S6(c)).

S3.2 Selected area electron diffraction patterns of an InSb nanosheet
To further examine the crystal quality of the InSb nanosheets, selected area electron diffraction (SAED) measurements were performed on different areas of the nanosheets. Figure S7 shows that the diffraction patterns acquired from different areas of a nanosheet are identical and display the spot pattern of a ZB single crystal, consistent with the defect-free structure seen in the TEM images.

S3.3 TEM images of a large-size InSb nanosheet
To obtain more detailed structural information on the InSb nanosheets, high-resolution TEM investigations of several InSb nanosheets of large size were conducted, and one set of results is shown in Fig. S8. The images again reveal a stacking-fault-free ZB single crystal (see also Table S2).

S4 Chemical composition of the InSb nanosheets
A small amount of As (less than 3%) was detected inside the InSb near the InAs/InSb interface by quantitative EDS point analysis and line scans (Figs. S9(c) and S9(d)). We note that in earlier studies 3,7-8, a small background of As was also detected. The As concentration is somewhat higher at the beginning of the InSb segment due to uptake of As from the stem, but it decreases rapidly in the InSb segment away from the heterojunction. The remaining particles on the InSb nanowires and nanosheets are found to be composed of Ag, In and Sb, confirming the activity of Ag-In-Sb ternary alloys in the catalyzed growth 5,9.

Fig. S10 caption: High-resolution TEM images of the InSb/seed-particle region taken from an InSb nanowire and an InSb nanosheet, respectively. Insets: the corresponding FFTs of the images.

S5 Seed-particle information of the InSb nanostructures
The seed-particle crystal structures of the InSb nanowire and nanosheet have been compared. As shown in Figs. S10(a) and S10(b), high-resolution TEM studies indicate that the spherical catalyst particles of the InSb nanowire and nanosheet are both single-crystalline with the same hexagonal crystal structure, which is different from Au-catalyzed InAs/InSb heterostructure nanowires 3,8 and InSb nanowires 10, where the particles have the ZB structure. As mentioned above, post-growth EDS analysis indicates that the contents of Ag, In and Sb in the post-growth seed particles of the InSb nanosheets and the InSb nanowires are similar (Table S2).

S6 A nucleation process of the InSb nanosheets
Figures S11(a) to S11(g) show one vapor-liquid-solid (VLS) and anisotropic lateral growth process of the InSb nanosheets. The VLS growth and the anisotropic lateral growth should take place at the same time (Fig. S11(b)) instead of sequentially, as evidenced by the fact that the nanosheets are very uniform in size, without any tapering, and that their sizes depend on the growth time as mentioned above. In our work, InSb evolves from thin nanowires to thick nanowires and then to nanosheets upon only increasing the Sb/In BEP ratio; we conjecture that the very high Sb flux could result in the formation of unstable droplets, as reported in the literature [11][12][13]. The unstable droplets could crawl along the direction of the triple-phase line
(one example is shown in Figs. S11(b) to S11(g)), which promotes the growth of InSb nanosheets with different shapes.

S7 Summary of the Hall-bar device parameters
Two Hall-bar devices (Dev-1 and Dev-2) were measured in detail in this work. Figure S12 shows the AFM images of Dev-1 and Dev-2. Table S3 summarizes the device parameters, including the Hall bar's length (L), width (W), nanosheet thickness (T), and the gate-to-nanosheet capacitance calculated from the geometric size (C_g1) and from the Hall measurement as discussed in the main text (C_g2). We note that after the sample is loaded into the dilution fridge chamber, a long, continuous vacuum pumping for more than 48 hours before cooling down is found to effectively reduce the devices' resistances by one order of magnitude. A similar effect has been reported in MBE-grown InAs nanowires and may relate to the surface desorption of water molecules under high vacuum 14.

S8 Extraction of the field-effect mobility in a nanosheet
In a 2D planar transistor, the S/D current I_ds is related to the total channel charge Q via
I_ds = Q v_d / L,
where v_d is the carrier drift velocity and L is the channel length. With v_d = μ V_ds / L and the gate-induced charge Q = C_g (V_gs − V_th), where μ is the carrier mobility, C_g is the gate-to-nanosheet capacitance and V_th is the threshold voltage, the channel conductance is
G = I_ds / V_ds = μ C_g (V_gs − V_th) / L^2.
The derivative of G with respect to V_gs is the transconductance g_m:
g_m = dG/dV_gs = μ C_g / L^2.
Note that the measured resistance comprises two terms: the sample resistance R_s = 1/G and the contact resistance R_c. The measured conductance is therefore
G_meas = 1 / [R_c + L^2 / (μ C_g (V_gs − V_th))].
This expression can be used to fit the on-state G-V_gs curve with the mobility μ, the contact resistance R_c and the threshold voltage V_th as fitting parameters. This is the pinch-off trace method used in the main text to extract the carrier mobility.

S8.1 Hall measurement on Dev-1
In the main text Fig. 5, we have shown the Hall results of Dev-2, which are used to extract the gate-to-nanosheet capacitance C_g. Similar measurements were also performed on Dev-1, and the data are shown in Fig. S13.

Fig. S13. Hall measurement and field-effect mobility fit of Dev-1. The main panel shows the extracted low-field Hall coefficients R_H (red circles) and carrier densities n (blue squares) as a function of the back-gate voltage V_gs. A linear fit to the slope of n-V_gs yields a gate-to-nanosheet capacitance C_g ~500 aF. The inset shows the field-effect mobility fit of Dev-1 using the same C_g value at T = 60 mK. This fit gives a field-effect mobility estimate of ~18,500 cm^2/Vs.

S8.2 Temperature-dependent field-effect mobilities of Dev-1 and Dev-2
Figure S14 shows Dev-1's field-effect mobility extracted as a function of temperature, corresponding to the G-V_gs relation in the main text Fig. 4. Figure S15 shows a similar measurement on Dev-2.

Fig. S14. The extracted field-effect mobility as a function of temperature for Dev-1. The mobility shows an increasing trend from ~4,000 cm^2/Vs at T ~250 K with lowering temperature and shows no sign of saturation in the low-T region.

Fig. S15. Temperature-dependent transfer characteristics measured on Dev-2. (a) 2-probe G-V_gs curves at various temperatures. Starting from 92 K, the on (off)-state conductance shows a decrease (increase) with increasing T, but the G-V_gs curve at 60 mK has a much lower on-state conductance compared to the 92 K curve. This may relate to electron-electron interaction and phase-coherent transport processes such as universal conductance fluctuations at low temperatures, which is supported by the more prominent fluctuations seen in the G-V_gs curve at 60 mK. (b) Extracted field-effect mobility of Dev-2 as a function of T. Similar mobility values of ~4,000 cm^2/Vs are found for the two devices at T > 250 K, but a slower increase is found for Dev-2.
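As a concrete companion to the derivation in Section S8, the following Python sketch fits synthetic on-state data with the pinch-off trace model; the channel length, capacitance and noise level are illustrative assumptions, not the parameters of Dev-1 or Dev-2.

import numpy as np
from scipy.optimize import curve_fit

# Pinch-off trace model from the derivation above:
#     G(V_gs) = 1 / (R_c + L^2 / (mu * C_g * (V_gs - V_th))).
# L and C_g are fixed, assumed values; mu, R_c, V_th are fitted.
L = 1.0e-6                                    # assumed channel length (m)
C_g = 8.3e-16                                 # assumed capacitance (F)

def pinch_off(V_gs, mu, R_c, V_th):
    dV = np.maximum(V_gs - V_th, 1e-3)        # guard against division by zero
    return 1.0 / (R_c + L**2 / (mu * C_g * dV))

# Synthetic on-state data standing in for a measured G-V_gs trace.
V = np.linspace(2.0, 10.0, 50)
G = pinch_off(V, mu=1.5, R_c=5e3, V_th=1.0)   # mu in m^2 V^-1 s^-1
G += np.random.default_rng(0).normal(0.0, 1e-6, V.size)

(mu_fit, R_c_fit, V_th_fit), _ = curve_fit(pinch_off, V, G, p0=[1.0, 1e3, 0.0])
print(f"mu = {mu_fit * 1e4:.0f} cm^2 V^-1 s^-1")   # recovers ~15,000

Because R_c enters the model explicitly, the fitted mobility is not suppressed by the series contact resistance, which is the point of the pinch-off trace method at low temperatures.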
Blended Glial Cell's Spiking Neural Network

Spiking Neural Networks (SNNs), the third generation of artificial neural networks, have been widely employed. However, the realization of advanced artificial intelligence is challenging due to the dearth of efficient spatiotemporal information integration models. Inspired by brain neuroscience, this paper proposes a novel spiking neural network, the Blended Glial Cell's Spiking Neural Network (BGSNN). BGSNN introduces glial cells as spatiotemporal information processing units on top of neurons and synapses, and also provides four new network dynamics connection models, which extend the information processing dimension and enhance the network's global information integration in the spatiotemporal domain as well as the plasticity of neurons and synapses. In this paper, a BGSNN application, a Sudoku solver, is designed and implemented on the "WenTian" neuromorphic prototype. On the Easybrain dataset, the BGSNN solver achieves 100% accuracy, outperforming an SNN solver with the same structure by 97% at the Evil difficulty level, and converges faster than the SOTA Sudoku solver LSGA. On the Kaggle dataset, the BGSNN solver achieves over 99.99% accuracy, outperforming the best publicly available DNN solver on this dataset by 3.82%. In addition, BGSNN exhibits good parallelism and sparsity, decreasing computation by at least 92.9% compared to serial solvers and being 88% sparser than an equally sized fully dense DNN. BGSNN improves the expression, feedback, and regulation capabilities of neural networks while maintaining the parallelism and sparsity advantages of SNNs, making it simpler to implement advanced artificial intelligence.

I. INTRODUCTION
The biological brain is a highly intelligent system existing in nature. SNNs adopt the biological neural computing model of the biological brain [1], which makes them more biologically interpretable than the Deep Neural Network (DNN) [2]. SNNs transmit information in the form of spikes, which gives them a unique temporal information processing capability, while their sparse firing also yields low power consumption [3]. Therefore, SNNs are considered promising network models for achieving more advanced artificial intelligence. However, SNNs that rely solely on neuron dynamics models and synaptic connections have a limited information capacity, and they may suffer spike errors or even spike disappearance in complex networks due to the decay of spikes during transmission, so that disconnected neurons cannot effectively transmit distal information, which makes it challenging to achieve advanced brain-like intelligence.

As brain neuroscience research progresses, researchers have discovered and demonstrated that the biological brain relies on more than just neuronal and synaptic networks to process and integrate information and to achieve advanced intelligence functions. Glial cells, which were once thought to only provide support for the biological activities of the brain, are likely to be one of the key players in the realization of higher brain intelligence functions [4]. The feasibility of such approaches has been corroborated by researchers who have drawn on the regulatory mechanisms of brain astrocytes in neural network models. The main contributions of this paper are as follows:
• Proposed a novel spiking neural network, BGSNN, with a global information interaction mechanism and a diverse network structure.
It supports global spatiotemporal-domain information integration and can plasticize neurons and synapses. This enables feed-forward and feedback optimization of the network, enhancing its expressiveness.

To verify the performance of BGSNN, we designed and implemented a Sudoku solver application for BGSNN based on the "WenTian" neuromorphic prototype, evaluated on the Easybrain and Kaggle datasets. The experiments show that BGSNN has excellent parallelism and sparsity performance, with Sudoku-solving accuracy exceeding 99% on all datasets.

This paper is organized as follows: Section II introduces the related work; Section III introduces the method in detail; Section IV reports the experimental results and analysis; and the conclusion is given in Section V.

II. RELATED WORKS
A. GLIAL CELL BIOLOGY RESEARCH
Glial cells are the collective name for a class of non-neuronal cells in the brain. Existing research suggests that glial cells do far more for the brain than merely offering structural and metabolic support for brain activities. Many studies have found that there is information interaction and regulation between glial cells and neurons through messenger substances such as molecules, ions, or proteins. For example, the receptors of glial cells in the activated state will affect the synaptic transmission activity of neurons, thereby affecting the long-term memory of neurons in the hippocampus [15]. Neuronal activity can significantly affect the activity of glial cells, and the cholesterol complexed to apolipoprotein-E-containing lipoproteins and the adenosine provided by glial cells can in turn regulate the growth and development of brain synapses [16], [17]. This shows that the whole process of the growth and development of the brain's neural network is the result of the interaction between the activities of neurons and glial cells.

In addition, researchers have found that information interaction between glial cells also takes place through messenger substances such as molecules and ions. Glial cells enhance intercellular communication through the diffusion of Ca^2+ in the glial space [18]. The communication between glial cells is the basis for the brain to distinguish different cognitive properties and is a key element for the brain to achieve cognitive functions [19], [20]. Researchers have also found that brain glial cells and neuronal synapses are physically connected, which affects the function and efficiency of synapses. Neuronal synapses have three-dimensional connection structures with glial protrusions [21], [22], which adhere to the synapses through Ca^2+-permeable glutamate receptors to maintain structural stability [23]. Glial protrusions are capable of transmitting a variety of molecules and ions, such as K^+, amino acid transmitters, tumor necrosis factor α (TNFα), ATP, and adenosine [24], [25], [26], which can affect the function and efficiency of synapses in order to regulate neuronal activity and integrate information in a spatiotemporal domain complementary to neurons [27], [28], [29], [30].

B. SNN PLASTICITY
Most existing SNN learning algorithms are only related to synaptic parameters, such as synaptic weights, whereas the neuron-related parameters in the network are often defined as hyperparameters, which to some extent limits the expressiveness of SNNs.
It has been pointed out that there are differences in the membrane time constants of neurons in different brain regions [31], [32], [33], and these differences play a crucial role in learning and memory [34], [35]. Fang et al. proposed learning membrane time constants along with SNN synaptic weights to achieve enhanced SNN learning [36]. This shows that SNN plasticity can be reflected not only in synapses but also in neurons.

C. GLIAL CELLS REGULATE NEURAL NETWORKS
The application of glial cells in neural networks has gradually attracted researchers' attention. Tang et al. implemented a triple synapse model based on glial protrusions on Intel's neuromorphic chip Loihi, realizing some special modes of biological brain information processing such as synchronous excitation and local plasticity [5]. Ivanov and Michmizos proposed a neuron-astrocyte liquid state machine (NALSM) [6], which addresses the low performance of LSMs through self-organized near-critical dynamics and verifies that brain-inspired machine learning methods have the potential to achieve performance comparable to deep learning while supporting robust and energy-efficient edge computing. At present, there is no precedent for using glial cells and their network structure in SNNs for parameter learning and global spatiotemporal information integration. Therefore, the purpose of this paper is to apply the glial cell network structure in SNNs and elaborate the corresponding research.

D. NEUROMORPHIC COMPUTING PLATFORM
Neuromorphic computing has become a new generation of research focus, and several excellent neuromorphic computing platforms, such as SpiNNaker [7], Loihi [8], TrueNorth [9] and Tianjic [10], have emerged. SpiNNaker, designed by the University of Manchester, is a massively parallel, many-core supercomputer architecture based on spiking neural networks that is able to simulate the human brain, providing new possibilities for neuroscience, robotics, and computer science. "Tianjic", developed by Tsinghua University, is the world's first heterogeneous-fusion brain-like computing chip; it adopts a many-core architecture, reconfigurable components, and a streamlined dataflow with hybrid coding schemes, and can support both machine learning algorithms and existing brain-like computing algorithms. This chip, integrating neuroscience and computer science, is expected to enhance the capability of each system, promote the research and development of artificial general intelligence (AGI), and provide a hybrid collaborative development platform for AGI technology.

In this paper, the "WenTian" neuromorphic prototype developed by the Nanjing Institute of Intelligent Technology is selected for the simulation design and implementation of BGSNN. The prototype consists of thirty FPGA boards installed in a cabinet in three layers of 10 boards each, forming a 3 × 10 torus topology network. Each individual node in the network, i.e., each board, is a many-core system with the ability to work independently. The "WenTian" neuromorphic prototype can support 480 Cortex-M4 processors running in parallel, enabling the simulation of millions of neurons and billions of synapses. It also has a supporting programmable simulation software system; to build the BGSNN simulation environment, one only needs to add the new network nodes and connection types on top of the SNN and adapt the original neuron model. Therefore, it is capable of simulating BGSNN.
III. BLENDED GLIAL CELL'S SPIKING NEURAL NETWORK
Existing artificial neural networks are composed of two basic units: neurons (network nodes) and synaptic connections (network connections). Inspired by the advanced characteristics of glial cells, a Blended Glial Cell's Spiking Neural Network (BGSNN) is proposed in this paper. In addition to neurons and synapses, BGSNN introduces glial cells and four corresponding network dynamics connection models, which expand the originally single information processing dimension of SNNs. Glial cells can act not only as spatiotemporal information processing units that process and transmit information, but also as modification units that modify neuronal and synaptic states in the network based on supervised signals and global spatiotemporal information.

A. NETWORK STRUCTURE OF BGSNN
The organization of all nodes and connections in BGSNN is diagrammed in Fig. 1. Inspired by biological brain research, BGSNN adds a special information processing unit, the glial cell, as well as four types of connections between neurons or synapses: neuronal ion connections, glial ion connections, glial gap connections, and glial protrusion connections. Neuronal ion connections are channels between neurons and glial cells: neurons release messenger factors to glial cells through these channels, and neuronal ion receptors receive the messenger factors transmitted by the neuronal ion channels [15], [17]. Glial ion connections are channels that also work between neurons and glial cells: glial cells release regulatory messenger factors to neurons via these channels, and glial ion receptors receive the messenger factors transmitted by the glial ion channels [4], [27], [30]. Glial gap connections are channels between glial cells through which messenger factors are mutually transmitted, and glial gap receptors receive the messenger factors transmitted by the glial gap channels [18], [19], [20]. Glial protrusion connections are channels between glial cells and neuronal synapses through which glial cells transmit regulatory messenger factors to neuronal synapses [21], [22], [23], [24]. Fig. 2 illustrates the general structure of the BGSNN, including the BGSNN neuronal cell populations and their connection relationships. In BGSNN, each glial cell population is both a recognizer and a sharer of local-area information.

B. DYNAMICS MODELS OF BGSNN
As scientists studied glial cells further, corresponding dynamics models of glial cells were proposed, mainly starting from molecular and ionic changes [37]. These models, however, oversimplify the mechanisms of glial cell activity and hardly reflect the advanced functions of glial cells in information processing. Therefore, current glial cell biodynamic models are not suitable for direct application to SNNs, and it is necessary to build dynamics models that combine the biological properties of glial cells with the engineering properties of neuromorphic computing. As shown in Fig. 3, this paper gives a general glial cell dynamics model framework, which contains three parts: the neuronal spike event processing module, the glial gap event processing module, and the spatiotemporal information integration processing module. The main structure of this framework is consistent with the biological structure of glial cells, and the designer can customize the dynamics algorithm of each module according to different engineering requirements.
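To make the framework concrete, the following Python sketch instantiates the three modules as simple weighted sums with a threshold-based firing rule; the weights, threshold, and firing rule are illustrative assumptions, since the paper deliberately leaves each module's algorithm to the designer.

import numpy as np

class GlialCell:
    # Minimal sketch of the three-module glial cell framework of Fig. 3.
    # The weighted sums and the threshold firing rule are illustrative
    # assumptions, not the authors' concrete algorithm.
    def __init__(self, w_ng, w_gg, threshold=1.0):
        self.w_ng = np.asarray(w_ng, dtype=float)  # neuronal ion weights
        self.w_gg = np.asarray(w_gg, dtype=float)  # glial gap weights
        self.threshold = threshold
        self.I_ng = 0.0
        self.I_gg = 0.0

    def process_neuron_spikes(self, S):
        # Neuronal spike event processing module: accumulate local spikes.
        self.I_ng = float(self.w_ng @ np.asarray(S, dtype=float))

    def process_gap_events(self, O_gg_in):
        # Glial gap event processing module: accumulate gap events
        # diffused from glial cells in other regions.
        self.I_gg = float(self.w_gg @ np.asarray(O_gg_in, dtype=float))

    def integrate(self):
        # Spatiotemporal information integration processing module: emit
        # glial ion, gap, and protrusion events, then rest and reset.
        fire = 1.0 if (self.I_ng + self.I_gg) > self.threshold else 0.0
        self.I_ng = self.I_gg = 0.0
        return fire, fire, fire          # (O_gn, O_gg, O_gp)

g = GlialCell(w_ng=[0.5, 0.5, 0.5], w_gg=[0.3, 0.3])
g.process_neuron_spikes([1, 0, 1])       # two local neurons fired
g.process_gap_events([1, 1])             # activity diffused from neighbors
print(g.integrate())                     # -> (1.0, 1.0, 1.0)

The three methods correspond one-to-one with the three modules of the framework, which is what allows a designer to swap in a different algorithm per module without changing the surrounding network.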
Considering that BGSNN adds the structure of glial cells, the framework of the neuron dynamics model changes accordingly, as shown in Fig. 4. On top of the original neuronal spike event processing module, the neuron dynamics model framework adds three parts: a glial ion event processing module, a glial protrusion event processing module, and a spatiotemporal information integration and processing module. The neuron dynamics model and the glial cell dynamics model together constitute the computing model of BGSNN. The details of the two models are described in the next subsection.

1) GLIAL CELL DYNAMICS MODEL
The glial cell dynamics model is a time-driven model, different from the event-driven model of neurons. Glial cells perform corresponding operations, such as making state judgments and firing spikes, within a specific time cycle. The information processing and response time cycle is shown in Fig. 5. A complete cycle T_g is divided into four subperiods: T_ng (information processing period from the neural network to glial cells), T_gg (information processing and response period from the glial network to glial cells), T_gn (information response period from glial cells to the neural network), and T_re (glial cell non-response period). During the T_ng period, the glial cells only process the input neuronal spiking information, i.e., they obtain the activity information of local neuron populations; during the T_gg period, glial cells diffuse valuable activity information of local neuron populations through glial gaps and receive the activity information of neuron populations diffused from glial cells in other regions; during the T_gn period, glial cells integrate the activity information of the local neuronal population with that of other regions and regulate the local neuron population activity accordingly; during the T_re period, glial cells rest.

Taking a single-layer feedforward neural network as an example, as in Fig. 6, the BGSNN network computation model can be expressed mathematically as follows according to the dynamics models.

a. Neuronal Spike Event Processing Module
This module operates during the T_ng period and is responsible for processing the neuronal spike events received by glial cells through the neuronal ion connection channels. The glial cell obtains its input current in the T_ng period through this connection channel, as shown in (1):
I_ng(t) = Σ_i w_ng,i · S_i(t),   (1)
where I_ng(t) denotes the input current value of the neuronal ion channels in the glial cell; w_ng,i denotes the weight of the i-th neuronal ion connection; and S_i denotes the output spike of neuron i.

b. Glial Gap Event Processing Module
This module operates during the T_gg period and is responsible for processing the glial gap events received through the glial gap connection channels. The glial cell obtains its input current in the T_gg period through this connection channel, as shown in (2):
I_gg(t) = Σ_j w_gg,j · O_gg,j(t),   (2)
where I_gg(t) denotes the input current value of the glial gap channels of the glial cell; w_gg,j denotes the weight of the j-th glial gap connection; and O_gg,j denotes the glial gap output event of glial cell j, which is produced by the spatiotemporal information integration processing module of that glial cell.

c. Spatiotemporal Information Integration and Processing Module
The module can output three different events in the form of spikes: glial ion events, glial gap events, and glial protrusion events.
As illustrated in Fig. 7, the outputs can be expressed as (3):
(O_gn(t), O_gg(t), O_gp(t)) = F_g(I_ng(t), I_gg(t)),   (3)
where O_gn denotes the output of the glial ion channels, i.e., glial ion events; O_gg denotes the output of the glial gap channels, i.e., glial gap events; O_gp denotes the output of the glial protrusion channels, i.e., glial protrusion events; F_g denotes the integration function of the module; I_ng denotes the input current value of the neuronal ion channels in the glial cell; and I_gg denotes the input current value of the glial gap channels in the glial cell. Here:
• Glial ion events: generated during the T_gn period; glial cells transmit glial ion events to neurons through glial ion connection channels, which can be used to modulate neuronal plasticity.
• Glial gap events: generated during the T_gg period; glial cells diffuse glial gap events among glial cells through glial gap connection channels, which can be used for global information interaction.
• Glial protrusion events: generated during the T_gn period; glial cells transmit glial protrusion events to neuronal synapses via glial protrusion connection channels, which can be used to modulate the plasticity of neuronal synapses.

2) NEURON DYNAMICS MODEL
a. Neuronal Spike Event Processing Module
As the spatiotemporal information computing unit, the neuron first processes the spike events it receives. The input to the neuron's spatiotemporal information integration and processing module is obtained through the calculation in (4):
N_n(t) = Σ_i w_i · S_i(t),   (4)
where w_i represents the weight of the i-th fan-in synapse and S_i represents the input spike event.

b. Glial Ion Event Processing Module
After the neuron receives a glial ion event through the glial ion connection channel, it correspondingly changes its state parameters to realize neuron plasticity, as shown in (5):
P_n(t) = P_n(t − 1) + Σ_i w_gn,i · O_gn,i(t),   (5)
where P_n(t) denotes the dynamic parameters of the neuron adjusted by glial ions at time t; w_gn,i denotes the weight of the i-th glial ion connection; and O_gn,i denotes the glial ion output event of the glial cell, given by the glial ion event of the spatiotemporal information integration processing module of the glial cell model.

c. Neuron Spatiotemporal Information Integration and Processing Module
This module receives the processed neuronal spike input and the neuron dynamic parameters adjusted by glial ion events, and then integrates and processes this information. As shown in Fig. 8, the output neuronal spike events can be expressed as (6):
O_n(t) = F(N_n(t), P_n(t)),   (6)
where O_n(t) represents the neuronal spike event output by the neuron; P_n(t) represents the dynamic parameters of the neuron adjusted by glial ions; N_n(t) represents the neuronal spike input; and F represents the neuron model.

d. Glial Protrusion Event Processing Module
After the neuron receives a glial protrusion event through the glial protrusion connection channel, it adjusts the weight of the neuronal synapse, i.e., the plasticity of the neuronal synapse. As shown in Fig. 9, this can be expressed as (7):
W(t) = W(t − 1) + w_gp · O_gp(t),   (7)
where W(t) represents the synaptic weight of the neuron at time t; W(t − 1) represents the synaptic weight at time t − 1; O_gp denotes the glial protrusion event of the glial cell, given by the spatiotemporal information integration processing module of the glial cell model; and w_gp denotes the weight of the glial protrusion connection.
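A minimal sketch consolidating equations (4)-(7) into one neuron update is given below; the leaky integrate-and-fire rule for F and the choice of the firing threshold as the glial-adjusted parameter P_n are illustrative assumptions, since the paper leaves both open.

import numpy as np

def neuron_step(S, W, P_n, O_gn, w_gn, O_gp, w_gp, V, tau=0.9):
    S = np.asarray(S, dtype=float)
    W = np.asarray(W, dtype=float)
    N_n = float(W @ S)                         # eq. (4): fan-in spike input
    P_n = P_n + float(np.dot(w_gn, O_gn))      # eq. (5): glial ion plasticity
                                               # of the firing threshold (assumed)
    V = tau * V + N_n                          # eq. (6): F as a LIF update
    O_n = 1.0 if V >= P_n else 0.0
    if O_n:
        V = 0.0                                # reset membrane after a spike
    W = W + w_gp * float(np.sum(O_gp))         # eq. (7): protrusion-driven
                                               # synaptic weight adjustment
    return O_n, V, P_n, W

O_n, V, P_n, W = neuron_step(S=[1, 0, 1], W=[0.4, 0.2, 0.3], P_n=1.0,
                             O_gn=[1.0], w_gn=[0.1], O_gp=[1.0], w_gp=0.05,
                             V=0.0)

Note that the glial events enter on two distinct timescales: eq. (5) and eq. (7) are only applied when the glial cell has actually fired in its T_gn period, whereas eq. (4) and eq. (6) run at every neuronal timestep.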
C. SPATIOTEMPORAL INFORMATION INTEGRATION AND PLASTICITY PRINCIPLE OF BGSNN
This section describes the mechanism by which BGSNN processes and integrates information in the spatiotemporal domain, and explains how BGSNN implements the modification of neural nodes and connections in the network. The spike transmission of BGSNN is shown in Fig. 10. Without glial cells, spikes can only be transmitted from presynaptic neurons to postsynaptic neurons via axons, synapses, and dendrites, and feedback transmission is impossible. With the introduction of glial cells, spikes can be secondarily processed by glial cells and transmitted through feedforward or feedback networks, giving the network the ability to integrate spatiotemporal information and the capacity for plasticity. In feedforward networks, spikes can be transmitted to fan-out synapses or postsynaptic neurons by feedforward network 1 or 2 via glial gap connections and glial cells, giving BGSNN the ability to perform secondary deep processing of spatiotemporal information and to feedforward-optimize or plasticize the network. In feedback networks, spikes fired by the postsynaptic neuron can be transmitted to the fan-in synapse via feedback network 1, which is composed of glial cells and glial protrusion connections, to adjust the parameters of the fan-in synapse; or to the presynaptic neuron via feedback network 2, which is composed of glial cells and glial gap connections, to adjust the parameters of the presynaptic neuron. Therefore, BGSNN has the capability of feedforward and feedback optimization or plasticization of the network.

The processing flowchart of BGSNN for spike events is shown in Fig. 11. For a complete glial cell, the neuronal ion channel is the direct sensing channel for short-distance coupling of spatiotemporal information; the glial gap channel is the indirect sensing channel for long-distance coupling of spatiotemporal information; the glial cell is the smallest unit for integrating and processing the global spatiotemporal information of the network; and the glial ion channel or glial protrusion channel is the direct optimization channel from glial cells to the neuronal network. To summarize, BGSNN adds the glial cell as a special information processing unit and four new types of connections between neurons or synapses. These changes give BGSNN the capabilities of global spatiotemporal information integration and of feedforward and feedback network optimization, enhancing the expressive ability and coupling of the network. Because of the addition of the glial cell structure, BGSNN supports not only general SNN training methods but also direct network adjustment through network feedback, so that inference can be performed directly without training.

IV. EXPERIMENTS AND RESULTS
In this section, an application example is designed and implemented to demonstrate the spatiotemporal information integration and plasticity of BGSNN. According to the application requirements, the glial cell dynamics model and the neuron dynamics model are instantiated; the details can be found in Appendix A.

A. BGSNN APPLICATION: SUDOKU PUZZLE SOLVER
Sudoku is an NP-complete problem [38], as shown in Fig. 12. The Sudoku board is composed of 9 blocks, each of which in turn consists of 9 cells. The rule is to infer the numbers of all remaining empty cells based on the known numbers on the 9 × 9 board, while satisfying the constraint that the numbers in each row, column, and block contain 1-9 without duplication. Every qualified Sudoku puzzle has one and only one answer, and the inference is based on this; any puzzle with no solution or multiple solutions is ineligible. When solving Sudoku problems, people try to fix a number from 1 to 9 in a cell according to the rules.
This process can be thought of as accumulating each number's possibility in the cell, with the cell finally filled with the number of highest probability. This process of accumulation fits well with the computation mechanism of spiking neural networks. At the same time, Sudoku problems are a class of problems that require local and global synergy, which can highlight the performance improvement brought by the global spatiotemporal information integration and plasticity of the glial cell structure. Therefore, we designed a Sudoku puzzle solver based on BGSNN as a case study to demonstrate the implementation of BGSNN for intelligent decision-making and its effectiveness. The software and hardware of the neuromorphic platform are designed for spiking neural networks with good parallelism, so to ensure the optimal performance of the BGSNN solver, we deployed the BGSNN solver on the "WenTian" neuromorphic prototype for the experiments.

The population structure of the BGSNN Sudoku solver network is shown in Fig. 13A; it consists of three populations: 1) the Sudoku initial input neuron population; 2) the Sudoku board neuron population; and 3) the Sudoku analysis and decision-making glial cell population. The Sudoku initial condition input neuron population establishes a one-to-one excitatory connection (one_to_one_excite(input)) to the Sudoku board neuron population, allowing the initial condition to be represented on the Sudoku board. The Sudoku board neuron population encodes the rules of the Sudoku game by establishing inhibitory connections (sudoku_rule_inhibit) within the population. The Sudoku analysis and decision-making glial cell population enables glial cells to make a rule-based analysis of the current state of the Sudoku board by internally establishing excitatory connections (sudoku_rule_excite) that satisfy the rules of Sudoku. The one-to-one excitatory connection (one_to_one_excite) established from the Sudoku board neuron population to the Sudoku analysis and decision-making glial cell population transmits the state of the Sudoku board to the glial cells for their perception of the local state of the board. The one-to-one excitatory connection (one_to_one_excite(adjust)) established from the Sudoku analysis and decision-making glial cell population to the Sudoku board neuron population makes it possible to regulate the state of the Sudoku board neuron population.

FIGURE 13. BGSNN Sudoku solver. A) Sudoku solver network population structure. B) Sudoku solver network structure (with the number "1" as an example): the Sudoku initial input neuron population inputs the initial condition "1" into the known cell; then the "1" in the same block, row and column is inhibited, and the remaining numbers in the cell other than "1" are also inhibited. The "1" in this cell transmits excitatory signals to the corresponding glial cells, causing the glial cells to adjust accordingly, i.e., the "1" in the same block, row, and column is excited, and the rest of the numbers in the cell other than "1" are also excited. The board also receives excitatory signals from the corresponding glial cells to make adjustments to the Sudoku board.

The Sudoku solver network structure (with the number "1" in a cell as an example) is shown in Fig. 13B. The BGSNN Sudoku solver can directly infer the solution without network model training.
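As an illustration of how the sudoku_rule_inhibit wiring can be enumerated, the following Python sketch builds the inhibitory connections implied by the rules above; the (row, column, digit) indexing of candidate neurons is an assumption of this sketch, not the authors' data layout on the "WenTian" prototype.

# Each candidate neuron (r, c, d) inhibits the same digit elsewhere in its
# row, column and 3x3 block, and the other digits in its own cell.
def sudoku_rule_inhibit():
    conns = set()
    for r in range(9):
        for c in range(9):
            for d in range(9):
                src = (r, c, d)
                for k in range(9):
                    if k != c:
                        conns.add((src, (r, k, d)))   # same row, same digit
                    if k != r:
                        conns.add((src, (k, c, d)))   # same column, same digit
                    if k != d:
                        conns.add((src, (r, c, k)))   # same cell, other digits
                br, bc = 3 * (r // 3), 3 * (c // 3)
                for i in range(br, br + 3):
                    for j in range(bc, bc + 3):
                        if (i, j) != (r, c):
                            conns.add((src, (i, j, d)))  # same block, same digit
    return conns

print(len(sudoku_rule_inhibit()))   # 20412 inhibitory connections

Each of the 729 candidate neurons ends up with 28 distinct inhibitory targets, for 20,412 connections in total, which is far sparser than a fully dense layer of the same size and illustrates where the sparsity advantage reported later comes from.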
Since there are no published research data related to SNNs solving Sudoku, this paper selects Sudoku solvers based on traditional algorithms, deep learning algorithms and DNN methods as experimental control groups for the comparison experiments.

1) EASYBRAIN DATASET
The Easybrain website provides five difficulty levels of Sudoku puzzles: Easy, Medium, Hard, Expert, and Evil. On this dataset, several controlled experiments were conducted: to verify the necessity of the glial cell structure; to evaluate the accuracy and performance of the BGSNN solver; and to verify the superiority of the parallel computing of BGSNN.

Figure caption: Sudoku puzzle solving process. A) Sudoku puzzle initial conditions. B) Spiking activity during Sudoku puzzle solving. Until a solution to the Sudoku puzzle is found, glial cells fire spikes at intervals of T_g. The neurons that correspond to the board stay active throughout the solving process, while glial cells help the equilibrium state arrive more quickly. When the equilibrium state is attained, only the neurons matching the numbers of the solution fire, leaving the other neurons inactive. C) Solution of the Sudoku puzzle.

a. Glial Cells Necessity Evaluation
An SNN Sudoku solver was designed as a control group, based on the network structure of the BGSNN Sudoku solver. Compared to the BGSNN network, this SNN network lacks only the glial cell module; a schematic diagram of its network structure is shown in Fig. 14. We took a Sudoku puzzle of Hard difficulty as an example, as shown in Fig. 15. The Sudoku analysis and decision-making glial cell population was adjusted for 15 iterations to finally bring the Sudoku board neuron population to the expected state, i.e., the correct solution.

The experiment used the Easybrain dataset to run the BGSNN solver and the SNN solver respectively, and recorded the average computational accuracy of the Sudoku solvers at different difficulty levels, as shown in Table 1. It can be seen from Table 1 that the SNN Sudoku solver has difficulty solving correctly at the Hard level and above, while the BGSNN Sudoku solver solves correctly at all difficulty levels under the same conditions, with a stable accuracy of over 99%. Evil has on average 1.71 fewer clue values than the Expert difficulty level, yet the accuracy of the SNN Sudoku solver drops by a further 20.41%. This demonstrates that the accuracy of an SNN solver without global regulation depends strongly on the number of initial clues. The difficulty of transmitting information across the entire board to achieve overall balance without global regulation may be the cause, since there are too many cells to fill and a great distance to cover. The only difference between the two Sudoku solvers was the glial cell structure. Therefore, the experiments show that the global information processing and neuronal plasticity brought by glial cells can effectively help the network converge to the correct answer, which verifies the necessity of the glial cell structure.

b. Evaluation of Accuracy over Runtime
The BGSNN Sudoku solver is a time-driven network solver, and the network completes the solution task through periodic iterations. To show how the accuracy of the BGSNN Sudoku solver varies with runtime, we divided the runtime into ten intervals, each 30 steps in length, with a 1 ms step size. Using the Easybrain dataset, for each difficulty level, the accuracy with which the BGSNN solver completely solves for the correct answer in each interval was counted. The BGSNN Sudoku solver was tested on the dataset with 30 puzzles per difficulty level, for a total of 750 trials (each puzzle was repeated 5 times).
Fig. 16 shows the accuracy of the BGSNN solver in each interval in which the answer can be completely and correctly solved. The accuracy of the BGSNN solver increases with runtime, and the most difficult level, Evil, requires at most 300 runtimes to complete the solution. Because of the probabilistic stochastic down-sampling function in the glial cell dynamics model used in the experiment (see Appendix A for details), there is stochasticity in each initialization, and a poor initialization leads to an increase in solution time. Therefore, the accuracy curve is not completely smooth but maintains an upward trend. As with other iterative algorithms, the accuracy of BGSNN increases with the number of iterations until the task is completed.

c. BGSNN Sudoku Solver Performance Evaluation
The BGSNN Sudoku solver is compared with the SOTA Sudoku algorithm for performance evaluation. LSGA [41] is an evolutionary algorithm for solving Sudoku and performs well in terms of accuracy and convergence speed. The dataset used by LSGA in the original paper is WebSudoku [42], which has four difficulty levels: Easy, Medium, Hard, and Evil; the mean numbers of clues for these levels are 36.25, 30.75, 27.63, and 25.62, respectively. By difficulty level, this dataset corresponds to the first four levels of EasyBrain. In the original paper, the number of generations needed by LSGA to obtain the optimal solution is used as the evaluation criterion, and this measure is independent of the hardware. BGSNN is a time-driven network solver that iterates periodically, and each iteration updates the network state and parameters until the correct solution is found; the number of iteration periods is likewise hardware-independent. In the comparison experiment, we consider one complete cycle of BGSNN as a generation and evaluate the performance with 100% accuracy guaranteed. In this experiment, the BGSNN solver was tested on the dataset a total of 750 times (5 repetitions per puzzle). The test results were compared with the published test results of the LSGA solver, as shown in Table 2. From the comparison, it can be seen that the LSGA solver requires a larger number of generations to find the correct solution than the BGSNN Sudoku solver at all difficulty levels, and that BGSNN can solve puzzles at higher difficulty levels. This proves that the BGSNN Sudoku solver converges faster with guaranteed accuracy.

d. Parallelism Evaluation
The BGSNN Sudoku solver experiments run on the ''WenTian'' neuromorphic prototype, which supports parallel computing. Parallel computing has significant advantages over serial computing for multi-loop complex problems, saving significant runtime and reducing algorithmic time complexity. The backtracking algorithm uses depth-first search to solve the Sudoku puzzle, traversing the cells until the correct solution is obtained, which is a typical serial logic algorithm; BGSNN supports parallel computing and can compute multiple neuron spikes at the same time. In this paper, a set of control experiments was designed to compare the number of loops required by each solver when solving the same Sudoku puzzle, to verify the advantages of parallel computing in time complexity; a sketch of the serial baseline is given below. There is no random function in the backtracking algorithm solver, so it yields a single loop count when a puzzle is completely solved.
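For reference, the following is a minimal sketch of such a serial backtracking baseline with an explicit loop counter. This is our own illustration of a depth-first solver, not the authors' exact implementation.

def valid(board, r, c, d):
    """Check whether digit d may be placed at (r, c) under Sudoku rules."""
    if d in board[r] or any(board[k][c] == d for k in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[r2][c2] != d
               for r2 in range(br, br + 3) for c2 in range(bc, bc + 3))

def solve(board, counter):
    """board: 9x9 list of lists, 0 = empty. Returns True when solved.
    counter[0] accumulates one 'loop' per visited cell, the quantity the
    comparison in Table 3 is based on."""
    for r in range(9):
        for c in range(9):
            counter[0] += 1
            if board[r][c] == 0:
                for d in range(1, 10):
                    if valid(board, r, c, d):
                        board[r][c] = d
                        if solve(board, counter):
                            return True
                        board[r][c] = 0
                return False          # dead end: backtrack
    return True                      # no empty cell left

counter = [0]
# solve(puzzle, counter); counter[0] then holds the serial loop count.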
Since there is a probabilistic random down-sampling function in the spatiotemporal information integration and processing module of the glial cell dynamics model used by the BGSNN solver, the solver may take a different number of loops for the same puzzle. To make the results comparable, for each puzzle the experiment was repeated 10 times using the BGSNN solver, and the average number of loops was compared with the result of the backtracking algorithm solver. In particular, the glial cell dynamics model in BGSNN is time-driven, and one timestep corresponds to one loop of the backtracking algorithm during hardware execution. Experimental results are shown in Table 3. In this experiment, to ensure the accuracy and stability of the solution results, we considered the BGSNN Sudoku solver to have correctly solved a puzzle when the same result could be output stably for three consecutive glial cell response cycles (T_g). In this experiment, T_g was set to 900 ms, so the BGSNN loop count in the table equals the number of loops required for the BGSNN solver to obtain the correct solution plus the length of the two additional response cycles (i.e., 1800). As can be seen from Table 3, BGSNN can reduce the computation by at least 92% through parallel computing, which helps improve the computing speed and reduce the computing cost.

2) KAGGLE DATASET

This dataset contains 1 million Sudoku puzzles with corresponding solutions. Statistically, the maximum number of clues in this dataset is 37, the minimum is 29, and the average is 33.81279. On this dataset, three sets of controlled experiments are conducted: to verify the accuracy of the BGSNN Sudoku solver on a large-scale dataset, to compare the performance of BGSNN with DNNs on the same-scale dataset, and to verify the superiority of BGSNN's sparse computing.

a. Accuracy Evaluation of the BGSNN Sudoku Solver
The experiment ran the BGSNN Sudoku solver on this 1-million-scale dataset and recorded the result and accuracy of each solution. The correct solution rate is shown in Fig. 17. According to the experimental findings, the solution accuracy of the BGSNN Sudoku solver reaches 99.9999% on this dataset. Within the 1-million-scale dataset, the solution accuracy of two puzzles was less than 100%, namely 90.1235% and 85.1852%. In this experiment, the upper limit of the running time was set to 20000 ms; if the upper limit was reached, the solver terminated automatically. Meanwhile, there is a probabilistic stochastic down-sampling function in the glial cell dynamics model used in the experiment (see Appendix A for details), and when the initialization is poor, the solution time increases accordingly. In response, three additional control experiments were conducted for these two puzzles, and the measured solution accuracy was 100%. It was therefore concluded that, in the first large-scale validation experiment, the cases where accuracy did not reach 100% were likely due to poor random initialization causing the runtime to exceed the upper limit.

b. Performance Evaluation of Different Sudoku Solver Approaches
Since the BGSNN Sudoku solver is a network-approach solver, several representative Sudoku solvers were chosen from the various Sudoku-solving algorithms released for the Kaggle dataset to evaluate the performance of the BGSNN solver.
The accuracy of the backtracking solver is up to 100% [43] with O(n^7) time complexity, but its computation is substantially greater than that of the BGSNN solver; this analysis was described in subsection c of the EasyBrain dataset experiments. ''Solve Sudoku with CNN acc 97%'' [44] is currently the published DNN solver with the highest accuracy apart from the backtracking algorithm. As the control group, it uses an 11-layer network structure, including 8 convolutional layers, 2 fully connected layers, and 1 softmax classification layer. The solver was trained for 12 epochs, each of size 12500, and the final training accuracy was 0.9617. The compared results are shown in Table 4. The table demonstrates that BGSNN outperforms the DNN in terms of accuracy, number of layers, and parameter count. The final solution accuracy of a DNN model depends on the training dataset and training algorithm; if either is inappropriate, the final accuracy decreases. BGSNN differs from a DNN in that the model supports direct inference, so its final solution accuracy does not depend on a training process. When data samples are insufficient, BGSNN can effectively resolve the problem that a model cannot be trained, and it provides a new solution for small-sample scenarios. In addition, we chose ''Sudoku Solver -97.45% Accuracy on 1M Games'' [45], which employs a systematic approach and achieves 97.45% accuracy without using neural networks, with O(n^6) time complexity. However, this is still below the 99.99% accuracy of the BGSNN Sudoku solver.

c. Sparsity Evaluation
BGSNN transmits information in the same form of spikes as an SNN, and the sparse spike firing keeps its computation low. The BGSNN Sudoku solver runs on the ''WenTian'' neuromorphic prototype, which supports computing with sparse matrices instead of full matrices. We ran the BGSNN Sudoku solver on this 1-million-scale dataset and recorded all the sparse matrices involved and the corresponding full matrices. The difference between the average computation of the two was counted as the reduction in computation achieved by the sparse computing of BGSNN compared with an equally sized fully dense DNN. The experimental results are shown in Table 5. According to the experimental findings, BGSNN can reduce roughly 88% of the computation through sparse computing, which contributes to lowering computing costs and increasing computing speed.

V. CONCLUSION

In summary, this paper proposes a novel spiking neural network, called the Blended Glial Cell's Spiking Neural Network (BGSNN), which is inspired by glial cells. BGSNN introduces glial cells and four corresponding network dynamics connection models, thus realizing the modification of neural nodes and synaptic connections, improving the capability of the network in deep local information processing and global information integration, and enhancing the expressive ability of the network. In the experiments, based on the ''WenTian'' neuromorphic prototype, this paper instantiates the dynamics models of glial cells and neurons for the application scenario, and designs the BGSNN Sudoku solver and a series of experiments for evaluation on the EasyBrain and Kaggle datasets. The solution accuracy of the BGSNN solver exceeds 99% on all datasets. With 100% accuracy, the BGSNN solver improves accuracy by over 97% over the same-structure SNN solver at the Evil difficulty level of the EasyBrain dataset, demonstrating the necessity of glial cells.
In the performance evaluation, compared with the SOTA Sudoku solver LSGA, the BGSNN Sudoku solver has a faster convergence speed. It also achieves over 99% accuracy on the million-scale dataset, which is 3.82% better than the publicly available optimal DNN solver on the same dataset. BGSNN also performs well in the sparsity and parallelism experiments: it improves sparsity by about 88% over an equally sized fully dense DNN and reduces computation by at least 92.9% compared with the serial logic algorithm. The experiments demonstrated that, in the absence of multi-dimensional information processing mechanisms and a global information interaction mechanism, neurons can only communicate with the neurons connected to them. When the network size is large, this kind of transmission process prevents unconnected neurons from efficiently transmitting distal information, since spikes attenuate during transmission. BGSNN has the ability of global regulation and secondary processing of information, resolving the bottleneck of the single-dimensional information processing mechanism, which can significantly improve the quantity and quality of information. It also has a more diversified network structure while maintaining the parallel-sparsity advantage of SNNs, which can effectively improve the expressive ability of the network and enable more advanced artificial intelligence.

APPENDIX A
INSTANTIATION OF BGSNN

The GLIF model, a BGSNN variant of the LIF (leaky integrate-and-fire) model [46], was chosen as the neuronal dynamics model for the experiments in this paper, and a mean disbursement strength encoding bandpass conduction error feedback glial dynamics model (mEIC_bPT_eFB) was proposed. The parameter settings of this model are shown in Table 6.

A. GLIF MODEL

1) NEURONAL SPIKE EVENT PROCESSING MODULE
As the spatiotemporal information computing unit, after the neuron receives a spike event, the spike input of the neuronal spatiotemporal information integration and processing module is obtained through calculation, which can be expressed as (8):

I(t) = Σ_i w_i S_i(t),    (8)

where w_i represents the weight of the ith fan-in synapse and S_i represents the input spike event.

2) GLIAL ION EVENT PROCESSING MODULE
After the neuron receives a glial ion event through the glial ion connection channel, it correspondingly changes its state parameters to realize neuron plasticity, as shown in (9):

P_n(t) = P_n(t-1) + Σ_i w^gn_i O^gn_i,    (9)

where P_n(t) denotes the dynamic parameters of the neuron adjusted by glial ions at time t; P_n(t-1) denotes the same parameters at time t-1; w^gn_i denotes the weight of the glial ion connection; and O^gn_i denotes the glial ion output event of the glial cell, given by the glial ion event in the spatiotemporal information integration and processing module of the glial cell model.

3) NEURON SPATIOTEMPORAL INFORMATION INTEGRATION AND PROCESSING MODULE
The mathematical expression of the module is shown in (6), where the neuron model is chosen as the LIF model, whose first-order differential equation is given in (10):

τ_m dV/dt = -V + R_m I,    (10)

where τ_m = R_m C_m is the membrane time constant and I is the sum of the synaptic currents generated by the firing behavior of the individual presynaptic neurons.
When the membrane potential V is greater than or equal to the threshold potential, the neuron immediately fires a spike accompanied by the conduction of an action potential, while resetting the membrane potential to V_reset and holding it there during the absolute refractory period (T_r).

4) GLIAL PROTRUSION EVENT PROCESSING MODULE
After a neuron receives a glial protrusion event through the glial protrusion connection channel, the synaptic weight of the neuron changes as shown in (11):

W(t) = W(t-1) + w^gp O^gp,    (11)

where W(t) represents the synaptic weight of the neuron at time t; W(t-1) represents the synaptic weight at time t-1; O^gp denotes the glial protrusion event of the glial cell, given by the glial protrusion event in the spatiotemporal information integration and processing module of the glial cell model; and w^gp denotes the glial protrusion connection weight.

B. mEIC_bPT_eFB MODEL

Based on the framework of the brain glial cell model and combined with practical engineering experience, this paper proposes a mean disbursement strength encoding bandpass conduction error feedback glial dynamics model (mEIC_bPT_eFB). This model is time-driven, i.e., state judgments and event firing occur only when a specific time period arrives.

1) GLIAL PROTRUSION EVENT PROCESSING MODULE
I_ng denotes the input current value of the neuronal ion channels of the glial cell, as shown in (12):

I_ng = II(t, T_ng) if t % T_g < T_ng, and 0 otherwise,    (12)

where t denotes the current time; T_ng denotes the information processing period from the neural network to the glial cells; T_gg denotes the information processing period from the glial network to the glial cells; and T_gn denotes the information processing period from the glial cells to the neural network.

2) GLIAL GAP EVENT PROCESSING MODULE
I_gg denotes the input current value of the glial gap channels of the glial cell, as shown in (13):

I_gg = 0 if t % T_g < T_ng or t % T_g > (T_ng + T_gg); II(t, T_gg) if T_ng ≤ t % T_g ≤ (T_ng + T_gg),    (13)

where the periods t, T_ng, T_gg, and T_gn are as defined above. The channel input current values II(t, T) in (12) and (13) are used to extract information from the input spike events, encoded by the average release intensity as shown in (14):

II(t, T) = (1/T) Σ_{τ = t-T+1}^{t} s⃗(τ) · W⃗,    (14)

where s⃗(t) is a 1×k-dimensional vector representing the binary spike input of the k fan-in connections of the glial cell at the current moment (by convention, ''0'' for no spike and ''1'' otherwise), W⃗ is a k×1-dimensional vector representing the weights of the k fan-in connections of the glial cell, and T indicates the time window size.

3) SPATIOTEMPORAL INFORMATION INTEGRATION AND PROCESSING MODULE
The mathematical expression of the mEIC_bPT_eFB model is given in (15), where O⃗_glial is the output vector of the glial cell, O_gg the output of the glial gap channels, O_gn the output of the glial ion channels, and O_gp the output of the glial protrusion channels. V_m(I_ng, I_gg) in (15) is the information integration function of the glial cell, representing the membrane potential of the glial cell, as shown in (16), where R_ng is the neuronal ion channel input resistance and R_gg is the glial gap channel input resistance.
mSIC(x) in (15) represents the average pulse release intensity encoding function, as shown in (17), where x denotes the value to be encoded for the given type of channel at the current moment, and pS(x) is a function that probabilistically determines whether to fire a spike, used to encode the information to be delivered in the form of a spike event, as shown in (18), where V_sa denotes the saturation threshold and RAND(0, 1) is a function that generates a random number in the range (0, 1):

pS(x) = 0 if x/V_sa < RAND(0, 1); 1 if x/V_sa ≥ RAND(0, 1).    (18)

bP(x) in (15) is a bandpass filter function for processing local spatiotemporal-domain information, i.e., for obtaining valid local neuronal population activity information and diffusing it to other regions of the glial cells. A trapezoidal filter function is used here, as shown in (19); by setting the parameters a, b, c, and d, the glial cells can select or filter specific neuron activity information:

bP(x) = 0 if x < a or x > d; (x - a)/(b - a) if a ≤ x < b; 1 if b ≤ x ≤ c; (d - x)/(d - c) if c < x ≤ d.    (19)

In (15), eFB(i, v) is the error feedback calculation function, which is used to process the global spatiotemporal information and in turn adjust the local neuron population activity, as shown in (20), where S(i) is the switching function, as shown in (21), and I_th is the effective current threshold of the glial gap input:

S(i) = 1 if i ≥ I_th; 0 if i < I_th.    (21)

D(v) is the error correction function, as shown in (22), where P is the feedback polarity coefficient, V_e is the expected average spike firing intensity value, and V_threshold is the minimum effective information threshold.
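To tie the appendix together, the following is a compact Python sketch of the instantiated dynamics: the LIF update of (10) with its threshold/reset rule, the probabilistic spike encoder pS of (18), the trapezoidal band-pass bP of (19), and the time-driven phase schedule over T_ng, T_gg, and T_gn. Parameter values are illustrative placeholders, not the settings of Table 6.

import numpy as np

def lif_step(v, i_syn, dt=1.0, tau_m=10.0, r_m=1.0, v_reset=0.0, v_th=1.0):
    """One Euler step of tau_m * dV/dt = -V + R_m * I, with threshold/reset.
    Refractory-period handling is omitted for brevity."""
    v = v + dt / tau_m * (-v + r_m * i_syn)
    spike = v >= v_th
    if spike:
        v = v_reset
    return v, spike

def pS(x, v_sa=1.0):
    """Fire a spike with probability min(x / V_sa, 1), as in Eq. (18)."""
    return 1 if x / v_sa >= np.random.rand() else 0

def bP(x, a, b, c, d):
    """Trapezoidal band-pass of Eq. (19): keep activity in the [a, d] band."""
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def glial_phase(t, t_ng, t_gg, t_gn):
    """Time-driven phase of the glial response cycle T_g = T_ng + T_gg + T_gn."""
    m = t % (t_ng + t_gg + t_gn)
    if m < t_ng:
        return "neuron->glia"        # neuronal ion channels active (Eq. 12)
    if m < t_ng + t_gg:
        return "glia->glia"          # glial gap channels active (Eq. 13)
    return "glia->neuron"            # glial outputs adjust the neurons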
Determinants of age at first marriage in Ethiopia using the 2016 Ethiopian Demographic and Health Survey: Application of the Cox Proportional Hazards model

Background
Although determining age at first marriage has various socioeconomic and demographic implications, the available information in Ethiopia is scanty.

Methods
We used data from the 2016 Ethiopian Demographic and Health Survey. The survival information of a total of 15,683 women of reproductive age (15-49) was examined. A Cox proportional hazards regression model was employed to identify the determinants of age at first marriage.

Results
The probability of first marriage was highest at the early ages of women, whereas it was lower at later ages. In Ethiopia, more than 44% of women were married before turning 18 years. The median age at first marriage was 17 years. According to the multivariate Cox proportional hazards model, significant determinants of age at first marriage were: women's educational level of secondary and above (HR=0.9, 95% CI: 0.80-0.92); region, namely Tigray (HR=1.6, 95% CI: 1.43-1.75), Afar (HR=1.6, 95% CI: 1.43-1.78), Amhara (HR=1.6, 95% CI: 1.46-1.79), Oromia (HR=1.3, 95% CI: 1.18-1.45), Somali (HR=1.4, 95% CI: 1.26-1.57), Benishangul (HR=1.4, 95% CI: 1.25-1.55), Southern Nations, Nationalities, and Peoples' Region (SNNPR) (HR=1.4, 95% CI: 1.29-1.60), Gambela (HR=1.5, 95% CI: 1.30-1.62), Harari (HR=1.2, 95% CI: 1.10-1.35), and Dire Dawa (HR=1.3, 95% CI: 1.20-1.47); rural residence (HR=1.1, 95% CI: 1.03-1.18); Muslim religion (HR=1.1, 95% CI: 1.0-1.20); first sex at <15 years (HR=2.6, 95% CI: 2.44-2.69); and five or more members in the household (HR=1.1, 95% CI: 1.01-1.12).

Conclusions
Women with lower education, those residing outside Addis Ababa and in rural Ethiopia, Muslim women, those who had first sex at <15 years old, and those in households with five or more members deserve special attention.

Background
Marriage is a recognized union of spouses; the recognition may be either legal or social. Marriage is formalized by a wedding or a small ceremony at the community or family level [1]. It is also an important family institution for individuals and communities at large. Marriage is an unforgettable event in one's life cycle and, moreover, central to the family formation process [2]. Individuals may marry for different reasons, including religious, financial, emotional, social, or legal purposes. Marriage reflects not only individual desires; it may be influenced by family choices, prescriptive marriage rules, societal influences, or gender. In some parts of the world, forced marriage and early marriage are practiced as culture or tradition; elsewhere, such practices are penalized by law as violations of women's and children's rights under international law [3]. Child marriage is a type of marriage below the age of 18 years. Such a marriage is carried out before the children are physically, financially, physiologically, and psychologically ready to shoulder the responsibilities of marriage, of managing a family, and of childbearing [4]. Married children lose a great deal, including educational opportunities; child marriage leads to poverty and negatively affects health and decision-making capacity [5]. The proportion of girls marrying as children is declining at the global level, and there has also been progress in Ethiopia in recent decades.
However, marriage before turning 18 remains a problem [6,7]. Therefore, the aim of this study was to examine the effect of socio-demographic factors on age at first marriage among women of reproductive age in Ethiopia.

Study design and setting
The dataset used in this study was obtained from the MEASURE DHS database at http://dhsprogram.com/data/ for the Ethiopian Demographic and Health Survey conducted in 2016, which is the fourth comprehensive survey. It was a population-based cross-sectional study conducted across the country. Administratively, Ethiopia is divided into nine geographical regions (Tigray, Afar, Amhara, Oromia, Somali, Benishangul-Gumuz, SNNPR, Gambella, and Harari) and two administrative cities, Addis Ababa and Dire Dawa.

Sampling procedures
The 2016 EDHS sample was stratified and selected in two stages. Each region was stratified into urban and rural areas, yielding 21 sampling strata. Samples of Enumeration Areas (EAs) were selected independently in each stratum in two stages. Implicit stratification and proportional allocation were achieved at each of the lower administrative levels by sorting the sampling frame within each sampling stratum before sample selection, according to administrative units at different levels, and by using probability-proportional-to-size selection at the first stage of sampling. In the first stage, a total of 645 EAs (202 in urban areas and 443 in rural areas) were selected with probability proportional to EA size based on the 2007 population and housing census, with independent selection in each sampling stratum. A household listing operation was carried out in all of the selected EAs from September to December 2015, and the resulting lists of households served as the sampling frame for the selection of households in the second stage. Some of the selected EAs were large, consisting of more than 300 households. To minimize the task of household listing, each large EA selected for the 2016 EDHS was segmented, and only one segment was selected for the survey with probability proportional to segment size; household listing was conducted only in the selected segment. That is, a 2016 EDHS cluster is either an EA or a segment of an EA. In the second stage of selection, a fixed number of 28 households per cluster were selected with equal-probability systematic selection from the newly created household listing. All women aged 15-49 who were either permanent residents of the selected households or visitors who stayed in the household the night before the survey were eligible to be interviewed. A representative sample of 18,008 households was selected for the 2016 EDHS. In the interviewed households, 16,583 eligible women were identified for individual interviews; interviews were completed with 15,683 women, yielding a response rate of 95% [4].

Study Variables
The dependent variable was age at first marriage measured in completed years, while the independent variables were women's educational level, region, place of residence, household ownership of a radio, household ownership of a television, religion, wealth index, age at first sexual intercourse, occupational status, and number of family members in the household.

Statistical Analysis
Both descriptive statistics and the Cox regression model were applied for the statistical analysis of age at first marriage, using Statistical Package for Social Science (SPSS) version 23 software. A minimal open-source sketch of this survival analysis is given below.
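The analysis was run in SPSS; purely as an illustration, the following is an equivalent sketch using Python's lifelines library. The column names are hypothetical stand-ins for the EDHS variables, not actual DHS variable codes.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical extract of the 2016 EDHS women's file: 'age_at_marriage' is the
# duration in years and 'married' the event indicator (1 = married by the
# interview date, 0 = censored at the interview).
df = pd.read_csv("edhs_2016_women.csv")

# Kaplan-Meier survivor function of age at first marriage (cf. Figure 1).
kmf = KaplanMeierFitter()
kmf.fit(df["age_at_marriage"], event_observed=df["married"])
print(kmf.median_survival_time_)   # the paper reports a median of 17 years

# Cox proportional hazards model: h(t|X) = h0(t) * exp(b'X), so each reported
# hazard ratio is exp(b). Categorical covariates (region, religion, ...) would
# need dummy coding, e.g., with pd.get_dummies, before fitting.
covariates = ["education", "region", "residence", "religion",
              "age_first_sex", "household_size"]
cph = CoxPHFitter()
cph.fit(df[["age_at_marriage", "married"] + covariates],
        duration_col="age_at_marriage", event_col="married")
cph.print_summary()                # hazard ratios with 95% confidence intervals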
Variables with a p-value of <0.25 in the univariate Cox regression analysis were included in the multivariable Cox regression model, in which hazard ratios with 95% confidence intervals were estimated to identify the determinants of age at first marriage. P-values < 0.05 were used to declare statistical significance.

Results
Background characteristics of women
Table 1 shows the distribution of women by selected socio-demographic predictors of age at first marriage in Ethiopia. In this study, a total of 15,683 women were included. Of these, 7,638 (48.7%) were working, while 8,045 (51.3%) were not working. In this study, 6,029 (38.4%) of the households had <5 family members and 9,654 (61.6%) had ≥5 family members. The median age at first marriage was two years older among urban women than among rural women; it varied by region from 15 years among women in Amhara to 20 years among women in Addis Ababa. The median age at first marriage increased with education, from 16 years among women with no education to 19 years among women with more than a secondary education. A one-year higher age at first marriage was observed among women in the rich wealth index group than among women in the poor wealth index group.

Kaplan-Meier estimates of overall age at first marriage in Ethiopia
The overall estimates of the Kaplan-Meier survivor function showed that the probability of first marriage was highest at early ages, whereas the survival function declined as the age of women increased, suggesting that the probability of marriage was lower at later ages (Figure 1).

Cox Proportional Hazards Regression Model Results
Univariate analyses were done to assess the association between each predictor and age at first marriage. As described above, all categorical variables with a p-value of <0.25 in the univariate analysis were included in the multivariable Cox proportional hazards model, and statistical significance was declared at p < 0.05. The results are detailed below (Table 2). The estimated hazard ratio for women with above-secondary education was 0.9 (95% CI: 0.80-0.92) compared with non-educated women (the reference category); that is, their hazard of first marriage was 0.9 times that of non-educated women (10% lower). The hazard of first marriage was 1.1 times higher among rural women than among urban women, since the estimated hazard ratio for women living in rural areas was 1.1 (95% CI: 1.03-1.18) compared with urban women (the reference category). The estimated hazard ratio for Muslim women was 1.1 (95% CI: 1.0-1.2) compared with Orthodox women (the reference category); the hazard of early marriage was thus more pronounced among Muslim women, whose hazard of first marriage was 1.1 times that of Orthodox women.
The hazard of first marriage for women who had first sex at less than 15 years of age was 2.6 times that of women who had first sex between 15 and 19 years, whereas the hazard for women who had first sex at 20 years or above was 0.36 times that of women who had first sex between 15 and 19 years; the estimated hazard ratios were 2.6 (95% CI: 2.44-2.69) and 0.36 (95% CI: 0.35-0.39), respectively, compared with women who had first sex between 15 and 19 years (the reference category). The risk of early marriage was 1.1 times higher in households with five or more family members than in households with fewer than five members, since the estimated hazard ratio for households with five or more members was 1.1 (95% CI: 1.01-1.12) compared with households with fewer than five members.

Discussion
The main aim of this study was to identify the determinants of age at first marriage in Ethiopia using the 2016 EDHS. The analyses revealed that women's educational level, region, place of residence, religion, age at first sexual intercourse, and number of family members in the household were statistically significantly associated with age at first marriage. The study also found that the median age at first marriage in Ethiopia was 17 years. This study found that the risk of early marriage was lower among women with secondary and above educational levels than among non-educated women. This could be because educated women can recognize the consequences of early marriage; they plan to continue learning and to strengthen themselves economically, socially, and in other respects before marriage. A study (8) conducted in Nigeria suggested that the higher the level of education of the women, the lower the hazard of early marriage. A study (9) conducted in Bangladesh found that higher education of the respondents is likely to be associated with a lower probability of early marriage. A study (10) conducted in Kenya also found that education had a statistically significant effect on the timing of marriage; more educated women were less likely to marry early. This study further revealed that early marriage was more common in all regions of Ethiopia than in Addis Ababa. Regions may have different levels of socioeconomic development and may be culturally different, which may lead to differences in age at first marriage. In addition, tradition, limited awareness of the adverse effects of early marriage, poor implementation of policies and programs, lower levels of modernization, infrastructure differences, and other factors may all cumulatively contribute to the higher rates of early marriage in the regions of Ethiopia outside Addis Ababa. A study (12) conducted in Mozambique found that ethnicity is an important predictor of age at first marriage. This study also revealed that the hazard of first marriage was higher among women residing in rural areas than in urban areas. Rural areas tend to have institutional and normative structures, such as kinship and the extended family, that promote early marriage and childbearing. A study (8) conducted in Nigeria found that women living in rural areas had a higher risk of first marriage than those in urban areas, i.e., the hazard for women living in rural areas is greater than that for urban areas. A study (9) conducted in Bangladesh found that the age at first marriage of the rural women studied was slightly higher.
A study (11) in Kenya suggested that delayed first sexual debut among women in urban areas significantly reduces the risk of entering into first marriage. This study found that Muslim women were more prone to early marriage than Orthodox women. The guidance on age at first marriage may differ across religions according to their teachings. A study (8) conducted in Nigeria likewise found that the hazard of early marriage was higher among Muslim women than among Christians. This study revealed that the risk of first marriage for women who had first sex at less than 15 years old was higher than for those who had first sex between 15 and 19 years, and that the risk of first marriage for women who had first sex at 20 years or above was lower than for women who had first sex between 15 and 19 years. This could be because women who have first sex under 15 years are exposed to unsafe sex, pregnancy, and other risks, and often lack the right to decide when to marry, since they depend on their families; for these and other reasons they may be forced to marry early. Women who have first sex at 20 years or above, by contrast, are better able to decide when to marry and to resist early marriage. A study (8) conducted in Kenya showed that age at first sexual intercourse was inversely related to age at first marriage; that is, an increase in age at first sexual intercourse would lead to a reduction in age at first marriage. A study (11) conducted in Kenya also found that women in urban areas who engaged in first sexual intercourse below age 15 were more likely to enter into marriage than those aged 15-19 years or 20 years and above. This study also revealed that the risk of early marriage was higher among households with more family members than among households with fewer family members. This could be because households with many family members are exposed to economic pressure, which may facilitate early marriage. A study (8) conducted in Nigeria found that the total number of children ever born was negatively related to age at first marriage.

Conclusions
The study demonstrated that women's educational level, region, rural place of residence, religion, age at first sexual intercourse, and number of family members in the household had statistically significant effects on age at first marriage. It also demonstrated that more than 44% of women in Ethiopia married early. Moreover, the median age at first marriage (survival time) was 17 years.
A Light-Weight Text Summarization System for Fast Access to Medical Evidence

As the volume of published medical research continues to grow rapidly, staying up-to-date with the best-available research evidence regarding specific topics is becoming an increasingly challenging problem for medical experts and researchers. The current COVID19 pandemic is a good example of a topic on which research evidence is rapidly evolving. Automatic query-focused text summarization approaches may help researchers to swiftly review research evidence by presenting salient and query-relevant information from newly-published articles in a condensed manner. Typical medical text summarization approaches require domain knowledge, and the performances of such systems rely on resource-heavy medical domain-specific knowledge sources and pre-processing methods (e.g., text classification) for deriving semantic information. Consequently, these systems are often difficult to speedily customize, extend, or deploy in low-resource settings, and they are often operationally slow. In this paper, we propose a fast and simple extractive summarization approach that can be easily deployed and run, and may thus aid medical experts and researchers obtain fast access to the latest research evidence. At runtime, our system utilizes similarity measurements derived from pre-trained medical domain-specific word embeddings in addition to simple features, rather than computationally-expensive pre-processing and resource-heavy knowledge bases. Automatic evaluation using ROUGE, a summary evaluation tool, on a public dataset for evidence-based medicine shows that our system's performance, despite the simple implementation, is statistically comparable with the state-of-the-art. Extrinsic manual evaluation based on recently-released COVID19 articles demonstrates that the summarizer performance is close to human agreement, which is generally low, for extractive summarization.

INTRODUCTION

The overarching objective of evidence-based medicine practice is to actively incorporate the best available and most reliable scientific evidence into clinical practice guidelines and decision-making (1). The movement associated with the establishment of evidence-based medicine practice has led to the development of evidence hierarchies for medical research, establishment of clinical practice guidelines, and recognition of the importance of patient-oriented evidence (2,3). Since the inception of the modern concept of evidence-based medicine, medical practitioners have been advised to combine their clinical expertise and understanding of patients' priorities with the latest scientific evidence (4)(5)(6). Early and recent studies have extensively discussed the problem of information overload that many practitioners face, particularly in clinical settings, due to the massive amounts of research evidence that is available and the continuous growth of such evidence (7). Searching through medical evidence regarding a specific topic is time-consuming, and practitioners often consider the task to be unproductive and futile (8)(9)(10). PubMed (https://www.ncbi.nlm.nih.gov/pubmed/; accessed 25 Nov 2019), which indexes over 30 million articles, typically returns multiple pages of research publications even when the queries are very targeted and specific. Almost two decades ago, Hersh et al. (11) discussed the long time (30 min, on average) that it takes for experienced practitioners to search for evidence, and, particularly at point-of-care, practitioners cannot afford to spend that much time.
Over time, with the increasing rate of publication of medical literature, these problems associated with evidence curation have only increased (12). Improved literature searching and fast access to relevant and summarized information can be particularly beneficial for medical students and young practitioners because of their lack of clinical experience, or at times when there is a burst of growth in research evidence on a topic (e.g., the ongoing COVID19 pandemic). Natural language processing (NLP) and information retrieval methods have the potential to aid medical experts and researchers to collect and review the latest and emerging research evidence in an efficient manner. NLP methods can, for example, help experts formulate effective search queries and summarize individual publications. Query-focused text summarization approaches have specifically been explored to aid medical practitioners adhere to evidence-based medicine principles (13)(14)(15). These systems take queries (in natural language or key-terms) as input and generate/extract the query-relevant summaries. In terms of automatic summary quality, the performances of successful approaches designed for the medical domain have relied heavily on domain-specific knowledge sources (16). For example, the pioneering work by Demner-Fushman and Lin (17) incorporated sentence-level knowledge in a supervised classification system trained to detect outcome sentences, which were regarded as summary sentences. Sarker et al. (14) and ShafieiBavani et al. (15) utilized manually annotated summarization datasets to generate extractive and abstractive summaries, both systems relying heavily on the identification of domain-specific generalizations, concepts, and associations. Similarly, Hristovski et al. (18) proposed the use of domain-specific semantic relations for performing question answering for biomedical literature. Building on past research progress, recent studies have proposed end-to-end question-answering systems, which typically contain modules to perform the summarization (12,19). Such systems, however, are generally only suitable for very specific types of queries, and despite their limited scopes, they invariably require the incorporation of medical domain-specific knowledge sources. The progress of summarization and question-answering research in the medical domain has been relatively sluggish, requiring considerable amounts of research efforts to overcome each of the many hurdles. Further discussion of the chronological progress in this research space is outside the scope of this brief research report, and detailed descriptions of medical domain-independent and domain-specific text summarization systems over the years are available through survey papers (20)(21)(22). Adaptation of summarization systems to a particular domain can be computationally expensive and require large numbers of external tools (23). Within the medical domain, systems typically attempt to incorporate domain knowledge based on the Unified Medical Language System via software such as MetaMap (24), which can tag lexical representations of medical concepts. This is in turn used in downstream tasks, or as features in learning systems.
Heavy dependence on these domain-specific systems introduces disadvantages, some of which are as follows: (i) the systems are not very portable or generalizable, and are only suitable for the very specific tasks they were initially designed and evaluated for; (ii) they are difficult to re-implement and/or deploy without the domain-specific knowledge sources or ontologies; and (iii) they are computationally slow, often un-parallelizable. The goal of our work is to design a resource-light and fast medical text summarization system that is decoupled from domain-specific knowledge sources. This work is an extension of our years of past research on this topic, focusing specifically on operational and deployment simplicity. The proposed system is extractive and query-focused in design. It relies on publicly available labeled data, which is used for weight optimization, unlabeled data (specifically, dense word embeddings learned from the unlabeled data), and a set of simple features that require little computational resources and time. In the development and evaluation processes, we selectively added and removed modules based on their performance and resource requirements. Comparative evaluation of our system against a state-of-the-art system on a standard dataset showed that it is capable of generating summaries of comparable qualities, despite its simplistic design.

METHODS

The primary dataset for this research is a corpus specifically for NLP research to support evidence-based medicine, created by Molla-Aliod et al. (25) with the involvement of the first author of this paper. The specialized corpus contains a total of 456 queries along with expert-authored single- and multi-document evidence-based summarized responses to them. Each query is generally associated with multiple single-document summaries, which present evidence from distinct studies. The abstracts of the studies from which the answers were derived are made available from PubMed. In total, the corpus contains 2,707 single-document summaries. To ensure fair comparisons, we used the exact train-test split from past research (14): 1,388 for training and 1,319 for evaluation. The system we compare against is very reliant on domain-specific NLP resources, and it had produced state-of-the-art performance on the described corpus. The second dataset is much smaller, and we prepared it to manually evaluate the performance of the summarization system. This dataset consists of a small set of articles describing research potentially relevant to COVID19. For each of these included articles, we manually created extractive summaries in response to a standard query, and we compared the agreement between our system and the manual summaries. For developing and optimizing the system, we used the training set to devise feature scoring methods and learn weights for all the feature scores. Here, the training set does not consist of the exact single-document summaries, which are abstractive summaries authored by human experts. Instead, the training set consists of three-sentence extractive summaries for each document, so that the gold standard is consistent with the expected output of our summarizer. These three-sentence extractive summaries of the training set are generated by computing the ROUGE-L F1-score of all three-sentence combinations against the human summary and selecting the top-scoring sentence combination for each text. We chose three as our target number of sentences based on past research (14,17). A sketch of this gold-extract construction is given below.
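The following is a minimal sketch of the gold-extract construction just described; the word-level LCS routine is our own plain implementation of ROUGE-L F1 (with β² = 1), not the official ROUGE toolkit used for the reported scores.

from itertools import combinations

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """Word-level ROUGE-L F1 with beta^2 = 1 (harmonic mean of R and P)."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

def best_three_sentence_extract(sentences, human_summary):
    """Return the three-sentence combination maximizing ROUGE-L F1 against
    the expert-authored abstractive summary."""
    return max(combinations(sentences, 3),
               key=lambda trio: rouge_l_f1(" ".join(trio), human_summary))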
During the summary generation process, each sentence from the full set of candidate summary sentences receives a score for each feature included in the summarization system. All candidate sentences are then scored as the sum of the weighted feature scores, and the three sentences with the highest scores are extracted as the summary. The scoring process takes into consideration the target sentence position, the sentence length, and the contents of the selected sentences. In the final summary, the selected sentences are presented sequentially (from first to last). The scoring process can be summarized as:

ζ_{m,t_n} = Σ_i w_{i,m,n} f_{i,m,n},

where ζ_{m,t_n} is the score for sentence number m of a text, given the summary target sentence number t_n, and w_{i,m,n} and f_{i,m,n} are the weight and score for feature i, respectively. For each summary sentence position (t_n), the top-scoring sentence is chosen. To explore and discover a set of simple but salient features, we started with the full set of features used by the QSpec system and removed modules or features with the highest dependencies and longest running times. For example, one important derived feature in the QSpec system is a sentence-level score based on the sentence type, the UMLS semantic types present in the sentence, and the associations between semantic types. Identifying the sentence type requires the execution of an automatic classifier at run-time (26), identifying UMLS semantic types and associations requires the execution of MetaMap (24), and once these processes are completed, an exhaustive concept-level search is performed to find and score the sentence based on the presence of each association. Due to the computational complexity of this module, and its dependence on external tools, we removed this feature first and attempted to optimize performance using the other features: those attempted in the past and those we added. In addition to the features used for the QSpec system, we evaluated a number of features such as variants of edit-distance-based lexical similarities and scores based on the presence of possible statistical testing information (e.g., p-values). These features did not meaningfully improve the overall system score, and they were also eventually excluded. Following experimentation with multiple feature combinations, we selected five that could be computed fast and proved to be useful when used in combination. We describe these features in the following paragraphs.

Word Embedding-Based Maximal Marginal Relevance
Maximal Marginal Relevance (MMR) (27) is a strategy that can be used to increase relevance and reduce redundancy, and variants of it have been popular for text summarization (28)(29)(30)(31). In our approach, we compute two similarity measures: between sentences and the associated query, and between the sentences themselves. During score generation, sentences are rewarded for being similar to the query, while at the same time they are penalized for being similar to sentences that have already been selected to be included in the summary. The similarity values are combined linearly with suitable weights (λ):

MMR(S_m) = λ · SIM(S_m, Q) − (1 − λ) · max_{S_c ∈ S_sel}(SIM(S_m, S_c)),

where SIM(S_m, Q) is the similarity score between a sentence and the query and max_{S_c ∈ S_sel}(SIM(S_m, S_c)) is the maximum similarity between the same sentence and the set of already-selected summary sentences. Choosing the best three-sentence summary is a combinatorial optimization problem, and MMR enables us to approach sentence selection in a sequential manner, as sketched below.
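A minimal sketch of this sequential MMR selection follows; `sim` can be any of the similarity variants described next, and the λ value is illustrative rather than a learned weight.

def mmr_select(sentences, query, sim, lam=0.7, k=3):
    """Sequentially pick k sentences by maximal marginal relevance.
    sim(a, b) is a similarity function over text segments; lam trades off
    query relevance against redundancy with already-selected sentences."""
    selected, candidates = [], list(sentences)
    while candidates and len(selected) < k:
        def mmr(s):
            redundancy = max((sim(s, c) for c in selected), default=0.0)
            return lam * sim(s, query) - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected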
Despite the widespread use of MMR for extractive summarization, two variants of this score that we use in this system, which rely on distributed representations of the words in the sentences and the queries, had not been proposed in the past, to the best of our knowledge. We obtained pre-trained embeddings that were generated from all PubMed and PubMed Central (PMC) Open Access texts (32) using the word2vec tool (vector size = 200, window size = 5) and the skip-gram model (33). For the first variant, we compute the similarity between two text segments (i.e., sentence vs. query and sentence vs. sentence) as the average cosine similarity of all the terms. We compute this average by adding the cosine similarities of all the term combinations and dividing by the product of the lengths of the two texts. For the second variant, we use the word vectors in a text segment to compute its centroid in vector space. A single centroid is computed for the set of all words within the set of already-chosen sentences (S_sel). These centroids are then used to compute MMR.

Traditional MMR Score
For the traditional MMR score, the third variant used in the system, we first pre-processed the terms by lowercasing, stemming, and removing stop words. We then computed the tf × isf for each word in a sentence and the query, where tf is the frequency of a term in a text segment and isf is the inverse sentence frequency of the term in all the texts (i.e., the inverse of how many sentences, including the query, contain the term). We then generated vectors for each sentence using the tf × isf values of the terms.

Sentence Length Score
Sentence length is a metric that may filter out uninformative, short sentences by assigning them a lower score, while rewarding sentences that are relatively longer in a document. In summarization tasks where the character lengths of the summaries are limited, longer sentences may also be penalized (34,35). We attempted to assign penalties to very short sentences (e.g., 1-3-word sentences), which often represent section headers. At the same time, our goal was to assign higher scores to longer sentences, with decreasing gradients for very long sentences, such that this score does not play a significant role in choosing between those informative sentences. Our experiments on the training set suggested that a sinusoidal function conveniently served this purpose. The average sentence length in the training data is ∼150 characters, so we considered 0 and 300 characters to be the lower and upper length limits, respectively, and mapped the lengths to the range [−π/2, π/2]. Following that, we applied a sin function to the mapped value to generate a length score between (−1, 1). Figure 1 illustrates how a sin function enables us to reward/penalize sentences based on their lengths relative to the average sentence length. Both reward and penalty start to level off as the length approaches 0 or 300.

FIGURE 1 | Clockwise from top-left: sine function for sentence length score, maxed at 300 characters; first, middle, and last sentence relative position distributions from the best-scoring extractive training set summaries.

Sentence Position Score
Our last score is based on sentence position and the target sentence number. Sentence position has been shown to be a crucial metric for extractive summarization in domains including news (36) and medical (17). We used an approach identical to our past work as it had proven to be computationally fast and effective (14).
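The following sketch illustrates the two embedding-based similarity variants and the sinusoidal length score described above, assuming `emb` maps tokens to the pre-trained 200-dimensional PubMed/PMC vectors; out-of-vocabulary handling is omitted for brevity.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_pairwise_sim(text_a, text_b, emb):
    """Variant 1: sum of cosine similarities over all term pairs, divided by
    the product of the two text lengths."""
    a, b = text_a.split(), text_b.split()
    total = sum(cos(emb[x], emb[y]) for x in a for y in b)
    return total / (len(a) * len(b))

def centroid_sim(text_a, text_b, emb):
    """Variant 2: cosine similarity between the centroids of the two texts."""
    ca = np.mean([emb[x] for x in text_a.split()], axis=0)
    cb = np.mean([emb[y] for y in text_b.split()], axis=0)
    return cos(ca, cb)

def length_score(n_chars, max_len=300):
    """Sinusoidal length score: map [0, 300] characters onto [-pi/2, pi/2]
    and take the sine, yielding a score in (-1, 1)."""
    x = min(max(n_chars, 0), max_len)
    return np.sin((x / max_len) * np.pi - np.pi / 2)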
The approach, which we called target sentence specific summarization, generates different scores for the same source sentence based on the summary sentence number. This means that the same sentence gets a different score when the system is searching for the first sentence for a three-sentence summary compared to when the system is searching for the last sentence. This ensures that the eventual summary extract is not biased to a specific region of the source text, which is often the case with traditional systems that apply the same scoring mechanism for all text spans. Generally speaking, when the system is scoring sentences for the first summary sentence, it gives preference to sentences occurring early in the source documents, which often contain important background information, compared to sentences occurring later, which tend to contain information about the final outcome of the study. To compute this score, we first obtained the best three-sentence summary (gold standard summary) for each training text, and used these sentences to generate normalized frequency distributions of the relative sentence positions. These distributions are shown in Figure 1. During summary generation, given the relative sentence position r of a source sentence, the score assigned is the normalized frequency for r in the given target sentence distribution.

Weight Optimization and Intrinsic Evaluation
We computed near-optimal weights for scoring using the training set via a grid search in the range (0.0, 1.0). For each weight combination, all the three-sentence training set summaries were generated, and the weights producing the highest F1-score were used for evaluation on the test set. The ROUGE summary evaluation tool (37) was used to compare the extractive summaries with the expert-authored summaries in the corpus. The ROUGE-L variant of the evaluation tool attempts to score summaries based on their longest common subsequences (LCS) (38). Given two texts, the automatic summary of length m words and the corresponding gold-standard summary of length n words, the F1-score is computed as:

F1 = (1 + β²) · R · P / (R + β² · P),

where R = LCS(summary, goldstandard)/m; P = LCS(summary, goldstandard)/n; and LCS(summary, goldstandard) is the length of the longest common subsequence between the summary and the gold standard. β² is set to 1. ROUGE scores had been shown to be correlated with human evaluators, and the ROUGE-L F1-score is the harmonic mean of the ROUGE-L recall and precision scores. In past research, the evaluations of many summarization systems were based on summaries constrained by character-level maximum lengths (e.g., 100 characters), and such evaluations typically used ROUGE recall scores for comparison. In our case, the summaries are constrained by the number of sentences (three), and so optimizing and evaluating based on recall would overfit the system in favor of longer sentences. Therefore, we chose to use the F1-score, rather than recall, and we computed them using the original human-authored summaries as the gold standard.

Extrinsic Evaluation on COVID19 Literature
We conducted a brief extrinsic evaluation of the system using a small number of recently-published articles about COVID19 or related research (39). We created six categories of queries focusing on different types of COVID19-related information (e.g., treatment and transmission). To establish these categories, we selected 2 from 12 categories that had been proposed in the literature (40) and added 4 additional ones.
Two of our query categories (treatment and prognosis) overlapped with the categories proposed in the past, and we added 4 more categories based on their relevance to COVID19 and our research interests. Note that our intent was not to determine a comprehensive set of categories relevant to COVID19. The queries, their types, and their numbers are shown in Table 1. For a set of 11 articles, we manually created 3-sentence summaries. The four authors independently created the three-sentence summary for each article. We modeled the sentence selection task as a binary sentence labeling task and compared the pair-wise agreements between the annotators using Cohen's kappa (41). We ran the summarizer with the best-performing weights on a large amount of COVID19 literature that has been made available since the outbreak of the pandemic. For 11 of these articles, which were manually annotated, we compared the agreements between the three-sentence human summaries, and between the system and human summaries. We also compared the agreement between the human summaries and summaries generated by the QSpec system. Two authors were kept unaware of the internal scoring strategy of the system to ensure that sentence selection is not biased by that knowledge. Note that the articles themselves were pre-selected, and were not based on the queries, since information retrieval is not an objective of our research.

RESULTS

Table 2 presents the performance of our system along with several other systems. Identical training-test splits were used for evaluation. Our proposed approach obtains a score of 0.166, 0.002 lower than the best-performing system. The table shows that despite the simplicity of our approach, its performance is comparable to the state-of-the-art, and significantly better than other baselines. To compare our approach with the extractive summarization method proposed by Demner-Fushman and Lin (17), we used an automatic classifier to detect Outcome sentences (42); the last three outcome sentences were extracted as the summary. Using the ROUGE score distribution of all summary combinations, we computed the percentile rank of our summarizer's performance via the method described by Ceylan et al. (43). In that approach, a probability density function is generated using an exhaustive search of all ROUGE score combinations for extractive summaries, and this distribution is used to find the percentile rank of a system's ROUGE score. Our light-weight system's score has a percentile rank of 94.3 compared to QSpec's rank of 96.8. The large difference in percentile rank despite the small change in the ROUGE score is caused by the typical long-tailed nature of the ROUGE score distribution.

Extrinsic Summary Evaluation
Pair-wise agreement, based on Cohen's kappa, was generally low for both sets of agreements (i.e., human-human and human-system).

DISCUSSION AND CONCLUSION

Using a set of similarity-based and structural features, our system performs comparably to the state-of-the-art system, with a ROUGE-L F1-score of 0.166. Our extrinsic evaluations showed that for this extractive summarization task, human-to-human inter-annotator agreement was low, resulting in a low ceiling for the automatic summarizer. We observed consistently low agreements across subsets of annotators, illustrating that choosing the optimal n-sentence query-focused summary is a difficult task for humans. Abstractive summaries could perhaps be more suitable for humans, as more information can be summarized within a short text span.
From the perspective of automatic summarization, however, moving from extractive to abstractive summarization has been challenging for this particular research community, and our scope was limited to extractive summarization. Although our evaluation was brief and differences between automatic and human summaries were not conclusive, we did observe more disagreements for earlier summary sentence selections compared to the selection of later sentences. Generally speaking, we found the gold standard summaries to have higher variance in relative sentence positions compared to automatically generated summaries. Figure 2 further illustrates the differences between the gold standard extracts and the automatic summarization systems by visualizing the distributions of the relative positions of the sentences included in the summary. While human summaries almost invariably contain sentences from the end of the texts, they also tend to contain sentences from different relative positions. In contrast, QSpec and our proposed system tend to select most sentences from the end and some from the beginning, but few from the rest of the document. Our specific focus for this summarization system was to make the sentence selection process simple and decoupled from multiple additional systems or processes while also maintaining high performance. Our focus on simplicity is particularly from the perspective of deployability (i.e., how quickly can the source for the system be downloaded and executed on a new machine?). Past systems focusing on the task of evidence-based medicine text summarization have relied on multiple knowledge-encapsulating software sources, such as MetaMap, and on parallel processes, such as query and sentence classification. Compared to the resource-heavy QSpec system, which requires query and sentence classification and the generation of UMLS semantic types/associations, our current approach requires minimal preprocessing: only a set of pre-trained word embeddings is required. The light-weight nature of the summarizer also means that it runs faster than QSpec. On a standard computer (Intel i5, 2.0 GHz), it takes our summarizer a few minutes to summarize all the documents in the test set. Due to the simplicity of our approach, we believe that it can be easily re-implemented, customized, or extended for real-life settings, and the results can be reproduced without requiring the use of third-party tools. It is possible for non-NLP experts or even non-programmers to use the summarization system without having to set up additional tools; the only resource needed is any publicly available pretrained word/phrase embedding model. From an application perspective, we believe that this summarization approach is more easily transferable to other data sets, even those in other languages that do not have domain knowledge encoded in thesauruses. Exploring the applicability of this approach to non-English datasets is part of our future research plans. We are particularly interested in assessing the performance of this system, compared to those reliant on domain-specific knowledge sources, on other languages without including a language-specific gold standard or manually-curated knowledge sources. Our hypothesis is that this light-weight summarizer will outperform resource-heavy systems such as QSpec on such data sets. We obtained the word embeddings from past research and used them without modification; a hedged sketch of how such embeddings can drive sentence scoring is given below.
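As a hedged illustration of how pretrained embeddings can drive query-focused sentence scoring in a system of this kind, the sketch below averages word vectors and ranks sentences by cosine similarity to the query. The file format, function names, and averaging strategy are assumptions made for the example; the actual system combines several similarity-based and structural features, as described earlier.

```python
import numpy as np

def load_embeddings(path):
    """Load word vectors from a text file with 'word v1 v2 ...' lines.
    The file layout assumed here is hypothetical."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vectors

def sentence_vector(sentence, vectors):
    """Average the embeddings of in-vocabulary tokens."""
    words = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return np.mean(words, axis=0) if words else None

def rank_sentences(query, sentences, vectors, top_k=3):
    """Rank candidate sentences by cosine similarity to the query."""
    q = sentence_vector(query, vectors)
    scored = []
    for s in sentences:
        v = sentence_vector(s, vectors)
        if q is not None and v is not None:
            sim = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            scored.append((sim, s))
    return [s for _, s in sorted(scored, reverse=True)[:top_k]]
```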
There is a possibility that the learning of these embeddings could be customized to the summarization task to improve performance (e.g., using a COVID19-specific embedding model for the second summarization task). This is a notable limitation of the system: the semantics of emerging health topics, such as COVID19, may not be captured by the underlying embedding model, thus leading to sub-optimal performance. Another operational limitation may be the size of the embedding model. While our focus is on a light-weight system that can be run on not-so-powerful computers, embedding models can be large (multiple gigabytes), which may be an obstacle for old machines. To address these limitations, in future research we plan to implement a continuously-learning embedding model that updates periodically using text from recently-published papers, along with strategies for building targeted embedding models that require less unlabeled data and memory at run time.

DATA AVAILABILITY STATEMENT

Publicly available datasets were analyzed in this study. These data can be found here: https://sourceforge.net/projects/ebmsumcorpus/ and https://sarkerlab.org/lw_summ/.

AUTHOR CONTRIBUTIONS

AS implemented the initial system and performed the evaluations. AA, MA-G, and Y-CY assisted with the experiments and evaluations and in preparing the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

This research was supported by generous funding provided by the School of Medicine, Emory University.
Peculiarities of Aluminum Anodization in AHAs-Based Electrolytes: Case Study of the Anodization in Glycolic Acid Solution

The anodization of aluminum (Al) in three alpha-hydroxy acids (AHAs): glycolic (GC), malic (MC), and citric (CC), was analyzed. Highly ordered pores in GC were obtained for the first time. However, the hexagonal cells were characterized by a non-uniform size distribution. Although common features of current density behavior are visible, the anodization in AHAs demonstrates some peculiarities. The electric conductivity (σ) of 0.5 M GC, MC, and CC electrolytes was in the following order: σ(CC) > σ(MC) > σ(GC), in accordance with the acid strength pKa(CC) < pKa(MC) < pKa(GC). However, the anodization voltage under which a self-organized pore formation in anodic alumina (AAO) was observed (Umax) decreased with increasing pKa: Umax(CC) > Umax(MC) ≥ Umax(GC). This unusual behavior is most probably linked with the facility of acid ions to complex Al and the active participation of the Al complexes in the AAO formation. Depending on the AHA, its tendency, and its modes of coordinating Al ions, the contribution of stable Al complexes to the AAO growth is different. It can be concluded that the structure of the Al complexes, their molecular mass, and their ability to lose electrons play more important roles in the AAO formation than the pKa values of the AHAs.

Introduction

The anodization of aluminum is one of the most studied electrochemical processes owing to its ability to produce a regular porous structure with tunable pore geometry [1,2]. The anodic aluminum oxide (AAO) resulting from the process can be applied as a membrane for chemical separation [3], in sensors [4], capacitors [5], high-density magnetic recording media [6], etc., or can serve as a template to produce other nanostructured materials with desired morphology and properties [7,8]. Generally, regular hexagonal pore arrays can be obtained under high-current-density conditions, which are mostly defined by the anodization potential, temperature, and electrolyte concentration [9,10]. The application of a low electrolyte temperature and a relatively high electrolyte concentration makes the high-current-density anodization proceed without burning [11]. Moreover, the hexagonal arrangement of pores is usually formed under an anodizing voltage (Umax) that is close to, but not greater than, the so-called critical voltage (Uc), above which a dielectric breakdown occurs (Umax < Uc) [12,13]. In other words, a high electrolyte concentration, a low anodizing temperature, and an applied voltage close to Uc (Umax) will favor the pores organizing into hexagonal close-packed structures (self-ordering regime). The Umax, in turn, regulates the interpore distance (Dc) in AAO. It was observed that the Dc is linearly proportional to the anodizing potential, with a proportionality constant of about 2.5 nm/V for mild anodization (MA) conditions [14,15]. The higher the applied voltage, the larger the Dc that can be obtained. On the other hand, it is quite well established that, for where the ia,max happened after a few minutes of anodization [39,40]. Kikuchi et al. [43] have demonstrated that the pore nucleation in malic acid initiates at grain boundaries of the aluminum. Before reaching the ia,max, the pores formed islands separated by flat regions where no concaves on the Al surface were present.
The concaves were spread over the entire Al substrate only after the ia,max peak, which was accompanied by a slow decrease of the current [43]. Glycolic acid's pKa1(C2H4O3) = 3.8 at 25 °C [23], and therefore it can be anticipated that a stable, self-organized growth of AAO will be possible under a Umax higher than that applied during anodization in citric and malic electrolytes. The electrochemical oxidation of Al foil in glycolic acid was performed by Chu et al. [42]. The process was carried out in a 1 wt% solution, at 150 V and 283 K, thus far away from the conditions where a self-ordering regime should be expected. Moreover, no detailed analysis of the process in GC was presented. The results so far obtained suggest that the anodization in AHA electrolytes needs deeper studies, especially in terms of the significance of pKa and its influence on a proper selection of anodizing parameters. In this work, Al anodization in GC, MC, and CC electrolytes performed within self-ordering regimes is discussed and compared. The anodization in GC solution under self-organized conditions was accomplished for the first time. Highly ordered pores were formed in 0.5 M GC, at 225-250 V and 5 °C. However, the hexagonal cells on the Al substrate, obtained after the dissolution of the formed oxide, were characterized by a non-uniform size distribution. Anodization at Umax > 250 V was characterized by an extremely high current generated during the process and a fast consumption of the Al substrate. Moreover, it was shown that the Umax applied during the anodization in the AHAs decreases with the increasing pKa of the acids. This unusual behavior was discussed, taking into consideration the possible participation of ionic species in AAO formation and their strong ability to form stable complexes with Al.

Materials and Methods

High-purity Al foil (99.9995% Al, Goodfellow, UK) with a thickness of about 0.25 mm was cut into rectangular specimens (2 × 1 cm). Before the anodization process, the Al foils were degreased in acetone and ethanol and subsequently electropolished in a 1:4 mixture of 60% HClO4 and ethanol at 0 °C, under a constant voltage of 25 V, for 2.5 min. Next, the samples were rinsed with distilled water and ethanol, and dried. The as-prepared Al specimens were insulated at the back and the edges with acid-resistant tape and served as the anode. A Pt grid was used as the cathode, and the distance between both electrodes was kept constant (ca. 5 cm). The Pt/Al electrode area ratio was about 25. High-purity citric acid was purchased from Chempur (Piekary Śląskie, Poland). Glycolic acid for synthesis was purchased from Sigma-Aldrich (Darmstadt, Germany). A large, 1 L electrochemical cell and a cooling bath thermostat (model MPC-K6, Huber, Offenburg, Germany) were employed in the anodizing process. An adjustable DC power supply with a voltage range of 0-300 V and a current range of 0-5 A (model GEN750_1500 TDK Lambda, TDK Co., Tokyo, Japan; purchased from NDN) was used to control the applied voltage. A RIGOL DM 3058E digital multimeter (Beaverton, OR, USA) was used to measure and transfer the registered current to a computer. Alumina was chemically removed using a mixture of 6 wt% phosphoric acid and 1.8 wt% chromic acid at 60 °C for 120 min. Morphological analysis was made using a field-emission scanning electron microscope (FE-SEM; AMETEK, Inc., Montvale, NJ, USA).
The layer thickness of AAO was determined from three measurements taken at different areas in the secondary electron (SE) image of a cross-sectional view of AAO. Finally, an average of the three measurements is given as the result. To obtain the interpore distance (Dc) of the fabricated samples, fast Fourier transforms (FFTs) were generated based on three SEM images taken at the same magnification for every sample and were further used in calculations with WSxM software (version 5.0) [45]. The average Dc was estimated as the inverse of the FFT's radial average abscissa from three FE-SEM images for each sample. The conductivity of the electrolytes was measured in a thermostatic cell with an Elmetron CC 505 conductivity meter (Zabrze, Poland). As a result, an average value from three measurements is provided.

Results and Discussion

Anodization of aluminum was conducted in 0.5 M GC water solution at a temperature of 5 °C. In Figure 1a,b, the current density (ia) vs. time (t) curves are shown for different values of the anodizing voltage. At U > 250 V, the extremely high ia made it impossible to perform the process for a longer time owing to a fast consumption of the Al substrate. Therefore, the anodizations at 300 and 275 V were stopped after 8 and 20 min, respectively. As the applied voltage decreased, the currents became less violent (ia significantly decreased), and at U = 200 V, it was possible to conduct a stable anodization for more than 4 h. The current evolution is similar to that observed during Al anodization in citric acid electrolytes [39,40]. As in CC electrolytes, the ia(t) curves first increase to high current peaks, followed by a decrease to a minimum, and then slowly grow to a maximal value (ia,max). After reaching the ia,max, the currents continuously decrease to a steady-state value. The turnover points were associated previously with various stages of pore nucleation and growth [39,40]. The minimum (marked by small arrows in Figure 1b) appeared later as the applied voltage decreased, giving an indication that the pore development was delayed owing to the smaller external electric force operating under a given electrochemical condition.
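As an aside, the FFT-based interpore-distance estimation described under Materials and Methods can be sketched as follows. This is a minimal illustration using numpy only; the square image array, the pixel size, and the function names are assumptions of ours, whereas the actual analysis used WSxM on FE-SEM images as stated above.

```python
import numpy as np

def radial_average(power, n_bins=200):
    """Azimuthally average a centered 2D power spectrum."""
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(x - w / 2, y - h / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    profile = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, profile[:n_bins] / np.maximum(counts[:n_bins], 1)

def interpore_distance(image, pixel_size_nm):
    """Estimate Dc (nm) as the inverse of the dominant spatial frequency
    in the radially averaged FFT of a square SEM image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    radius_px, profile = radial_average(spectrum)
    peak = radius_px[np.argmax(profile[1:]) + 1]     # skip the DC bin
    freq = peak / (image.shape[0] * pixel_size_nm)   # cycles per nm
    return 1.0 / freq
```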
The pits formed on the Al substrate after the AAO dissolution, being an imprint of the pores' bottoms in the AAO, are shown in Figure 2. The pores organize into a few areas containing cells of various sizes. SEM images in Figure 2 show a larger region of a given sample (first column) and magnifications of the areas with the two extreme cell sizes, designated as I (second column) and II (third column). In Table 1, the interpore distance (Dc) values that correspond to the center-to-center distance between neighboring cells for both the I and II areas are gathered. It is observed that after anodization at 300 and 275 V (the extremely high currents), only the pores from area I exhibit the features of hexagonal ordering typical for the AAO matrix. Area II demonstrates rather poor pore organization. At 250 V, pores in both areas are organized into a close-packed hexagonal structure. Starting from 225 V, the pore arrangement seems to be weakened in area I, whereas it is still very good in area II. The interpore distance (Dc) in both areas tends to decrease with the anodization voltage (Table 1). Moreover, as the applied voltage decreases, the Dc in area II becomes successively closer to that obtained during anodization in 0.3 M oxalic acid solution at voltages of 40-60 V [15,21]. It is worth noticing that the barrier layer is not interrupted by the extremely high currents generated during the process: hexagonally arranged corrugations are present even on Al substrates produced at 275 and 300 V. The ia(t) curves for all anodized samples show a similar evolution. After reaching the ia,max, the current decreases and stabilizes at a certain value (greater for a larger applied voltage), with no characteristic sudden, rapid, and continuous rise typical for dielectric breakdown [43,44].
This behavior was also observed by Ma et al. [39,40] during anodization in citric acid under high voltage and concentration. Instead of the dielectric breakdown, the high-current-density anodization was accompanied by a continuous improvement of the pore arrangement and a massive incorporation of citric anions (formation of black oxides). Pore evolution at various stages of anodization was studied at 250 V. The morphology of the Al substrates was analyzed after 100, 500, 1500, and 3000 s of anodization (Figure 3a) and subsequent removal of the AAO (the pore arrangement on the Al substrate after 4500 s of anodization is shown in Figure 2). As can be seen, in the process conducted in 0.5 M GC electrolyte, the pores are already formed after 100 s (the first minimum in the ia(t) curve). However, their organization is rather poor in both areas I and II. The peak that appears at 500 s of the process (ia,max) signals the commencement of pore organization into the hexagonal arrays. As the process proceeds, the hexagonal arrangement becomes better and better. The pore evolution is generally the same as that observed during anodization of Al in CC. Nevertheless, owing to the much lower current densities reached during the process conducted in the CC electrolyte, the nanodents on aluminum were still irregularly arranged after reaching the ia,max [39,40]. This suggests that the mechanism of the AAO formation is very similar in both electrolytes. Cross-sectional views of the AAO grown at different anodization stages are shown in Figure 4. As can be observed, the growth of AAO is very rapid, especially at the beginning of the process, and slows down after approximately 2000 s. This is an effect of the extended diffusion path along the pores (diffusion limitation) as the thickness of the AAO exceeds 100 µm. The same phenomenon was observed during the hard anodization in oxalic acid [46]. The thickness of the resulting AAO is not uniform along the entire AAO cross-section, and this heterogeneity increases with anodization time (the graph in Figure 4 shows an average of three film thickness values that were measured at three different locations along a given AAO cross-section). Furthermore, it can be seen that the continuity of the top part of the AAO becomes disrupted as the anodization proceeds. After 100 s, the top layer is still smooth, without cracks (Figure 4). However, after reaching the current peak at 500 s, when the reorganization of pores into various domains occurs, the surface of the AAO membrane begins to crush and delaminate. This effect happens because of the extremely rapid and inhomogeneous growth of the AAO and the huge stresses generated throughout the whole film. After 1 h of anodization, the AAO thickness is already ca. 142 µm. The thicknesses of self-organized AAO obtained in various electrolytes and electrochemical conditions are presented in Table 2. From the analysis, it appears that despite its highest pKa1 value (the acids in Table 2 are arranged according to increasing pKa1 value), the process conducted in GC is one of the most violent. The AAO growth rate is lower even during HA in oxalic acid solution. The process is also the fastest among the ones performed in the other AHA electrolytes: the AAO thickness is about 50 µm after 1 h of anodization in 1.5 M CC at 400 V, and ≈165 µm after 6 h of anodization in 0.5 M MC at 250 V.
In our previous work, we observed a very fast aging of malic acid electrolytes when the anodization was repeated in the same MC solution several times in a row, which was manifested in a visible change in the current density vs. time transients and in a continuous decrease of the electric conductivity (σ) [44]. A similar experiment was performed in the glycolic acid electrolyte. In Figure 5a, ia(t) curves recorded during 2.5-h anodizations in 0.5 M GC solution, at 250 V and T = 5 °C, five times in a row (processes no. 1-5), are demonstrated. As can be seen, the current density drops in every subsequent cycle. Similar to the conductivity behavior in the MC electrolyte, the σ also decreases as the number of anodization cycles increases (Figure 5b), suggesting a decreasing amount of ionic species in the electrolyte and their transition from the solution to the AAO matrix. Moreover, in accordance with the larger pKa1 of GC over MC (pKa1(C2H4O3) = 3.8 > pKa1(C4H6O5) = 3.5), σ(C2H4O3) < σ(C4H6O5) (anodization cycle = 0 for the GC electrolyte means that the σ was measured for a freshly prepared solution). In Figure 5c, the ia(t) curves for the samples anodized in 0.5 M GC and MC electrolytes, at the same temperature (5 °C) and anodizing voltage (250 V), are shown. As can be seen, despite the lower σ of GC, the AAO growth is much more rapid in the GC electrolyte than in the MC electrolyte: the ia,max was reached at about 8 min in GC, whereas in MC it appears only after ca. 4 h. In Figure 5d, an SEM image of the Al concaves after anodization no. 5 is shown. It is visible that the concaves change neither with a twofold increase of the anodization time nor with the decreasing σ in the successive cycles. As a result, the AAO possesses a complex morphology with close-packed hexagonal cells that are grouped by their different sizes into separate areas. The cells in area I are more than two times larger than the cells in area II (Table 1). Porous membranes built of pores of various sizes are needed to study the size effect of various nanostructures on their functional properties. AAO with a gradient distribution of pore diameter was prepared by specially designed experiments involving a nonparallel arrangement between an aluminum sheet and a cathode [52] or by a bipolar electrochemical anodization route [53].
In the first work, a change in interpore distance from 300 to 250 nm was obtained [52]; in the second work, a continuous change in interpore distance from ≈171 to ≈83 nm over a range of 5 mm on the aluminum sheet was achieved [53]. Although in the AAO produced in the GC electrolyte there is no continuous change of Dc (the areas of different pore sizes are distributed rather stochastically), it should be noted that the various cell sizes were obtained in a single anodization process, without the necessity of using special equipment. Among the three AHAs: citric (CC), malic (MC), and glycolic (GC), the CC is characterized by the lowest pKa. According to the theory that takes the magnitude of the dissociation constant as the basic criterion for determining Umax (Umax < Uc) [10,13,16,18,42], in this electrolyte it should not be possible to carry out the anodization under a voltage larger than that applied during anodization in the MC and GC electrolytes. Yet, a stable anodization in the CC electrolyte was performed under a much higher anodizing voltage (350-400 V) and, what is more, in a three times larger acid concentration (1.5 M) [39,40]. In this work, the electrochemical conditions to form AAO in CC are systematically analyzed and compared directly with those applied in the GC electrolyte. In Figure 6a, ia(t) curves recorded during anodization in 0.5 M CC solution at 0 and 5 °C, at 300 V, are presented. As can be seen, no sign of pore formation is observed. Therefore, the acid concentration was increased to 1.5 M, resulting in a current increase after some anodizing time at 300 V, which indicated the onset of the pore formation process. After the anodization, the oxide was dark greenish (the insert in Figure 6b) rather than black, as previously observed after anodization at 400 V and 0 °C [39,40]. In the ia(t) curve, two current maxima (ia,max) of similar intensities are visible. The ia,max is, however, much smaller than that recorded during anodization in both the MC and GC electrolytes: ia,max ≈ 300 A/m² as compared to ia,max ≈ 1000 A/m² in MC and ia,max ≈ 3500 A/m² in GC (see Figure 5c). When the anodizing voltage is reduced to 250 V, the ia(t) decreases as well (Figure 6b).
The pore sizes are much more uniform as compared to those produced in the GC electrolyte, although two areas can still be distinguished: one (the area Ia) with almost perfect hexagonal close-packed structure, and the second (the area Ib) with a little bit worse pore arrangement and thus a slightly larger interpore distance. In Table 3, the Dc determined for both areas are presented, together with Dc obtained in other AHA electrolytes at 5 °C. The Dc in AAO produced in CC electrolyte is the largest compared to that reached in the other two AHA electrolytes, mostly due to the highest Figure 8. At 300 V, the hexagonal concaves on Al are clearly visible. The pore sizes are much more uniform as compared to those produced in the GC electrolyte, although two areas can still be distinguished: one (the area Ia) with almost perfect hexagonal close-packed structure, and the second (the area Ib) with a little bit worse pore arrangement and thus a slightly larger interpore distance. In Table 3, the D c determined for both areas are presented, together with D c obtained in other AHA electrolytes at 5 • C. The D c in AAO produced in CC electrolyte is the largest compared to that reached in the other two AHA electrolytes, mostly due to the highest U max . However, generally, the interpore distance in AAO fabricated in AHAs is rather weakly linked with the applied voltage. Despite the same U = 250 V (and all other anodizing parameters) used in GC and MC electrolytes, the D c in AAO grown in MC is much larger than that produced in the GC electrolyte (even if one takes into account the D c values from area I). This effect is most probably associated with the complex architecture of the pores formed in GC solution (a few areas of different cell sizes), which eludes the simple proportionality between D c and U in this electrolyte. On the Al substrate fabricated at lower voltages (275 and 250 V), tiny dimples with no arrangement are visible after the oxide dissolution. This suggests that the conditions were already not sufficient to induce AAO growth with the typical long, hexagonally organized channels. In Table 4, electric conductivity determined for the citric acid electrolytes is gathered. According to Pashchanka et al. [54], the electric conductivity is one of the crucial parameters of electrolyte compositions, with its minimum limit requirement for the self-assembly being around 4-5 mS/cm. This requirement is fulfilled for all AHAs studied in this work. The σ values of the CC electrolyte are the highest among the other AHAs (Figure 5b), as expected based on its lowest pK a . Furthermore, the CC electrolyte seems also to be not so prone to aging when the anodization is repeated in the same solution: the values remain stable after the subsequent anodizing process conducted in 1.5 M CC solution, at 5 • C, and under decreasing anodizing voltage (Table 4). Table 4. Electric conductivity (σ) of citric acid water solutions measured directly after the electrolyte preparation ("fresh" solution) and after the subsequent anodizing processes (arranged sequentially from up to down the Table). In Table 5, the molecular structure, pK a values, electric conductivity, electrolyte concentration, and U max were determined for anodization conducted at 5 • C, where the best pore ordering (close-packed hexagonal structure) was observed, are gathered. As can be seen, the conductivity decreases with the strength of the acid: for pK a (CC) < pK a (MC) < pK a (GC), the σ(CC) > σ(MC) > σ(GC). 
As was already mentioned, it is frequently assumed that the pKa is the most important parameter governing anodization (a smaller pKa corresponds to more acid anions, which can provide a higher incorporation current (jc) and thus a smaller Umax [10,13,16,18,42]), and even small variations in electrolyte concentration and the application of various anodizing temperatures do not undermine the leading role of the pKa. However, this assumption seems not to hold for AHAs. Instead of the expected increase of Umax with decreasing acid strength, Umax(CC) < Umax(MC) < Umax(GC), the Umax applied during anodization in the AHA electrolytes appears in the following order: Umax(CC) > Umax(MC) ≥ Umax(GC) (Table 5). U = 250 V was the maximum anodizing voltage to obtain regular hexagonal pore arrays in the MC electrolyte; lowering U resulted in a worsening of the pore ordering in AAO in MC [45]. In GC, instead, a stable self-organized anodization was performed at U ≤ 250 V. In the CC electrolyte, a 0.5 M concentration was not sufficient for pore nucleation to begin even at 300 V, and a substantially greater concentration had to be used to induce pore self-organization. In Table S1 (Supplementary Materials), the relationship between pKa and Umax is analyzed in more detail. Excluding the examples where no self-ordering regimes were found (anodization in acetylenedicarboxylic and squaric acid solutions), the Umax increases with the pKa of the acid, with some exceptions (Table S1). Anodization in phosphonic, etidronic, and phosphonoacetic acid solutions (highlighted in grey in Table S1) actually started at a much lower voltage, which was next increased to the targeted values listed in Table S1; the anodization was then conducted under the target voltages for a predetermined time. The method, although not named hard anodization, was in fact very similar to the approach applied in HA [46] and, therefore, cannot be directly compared with the other processes gathered in Table S1. The lower than expected Umax in the case of the malonic and tartaric electrolytes can be explained by the one order of magnitude larger acid concentrations (5 M and 3 M, respectively) used in those processes, which are considerably larger than those commonly used. The large concentration leads to more acid anions and higher ionic currents that induce a lower Umax [16,18]. Therefore, the malonic and tartaric acid anodizations seem to confirm the rule: the stronger the acid, the lower the Umax. However, the selenic acid solutions and particularly the AHA-based electrolytes fall outside this trend [40,44,55,56]. These observations give an indication that other parameters (including the molecular mass of the acid species) and phenomena should be taken into consideration when aluminum is electrochemically oxidized in acidic solutions.

Table 5. Molecular mass and structure, dissociation constants pKa (acidity at T ≈ 25 °C) [23,29], and conductivity of the α-hydroxy acids, together with the concentration and Umax applied during anodization of Al. (The table lists, for each acid, the name and molecular mass (g/mol), molecular structure, pKa at ≈25 °C, concentration (M), and Umax (V); among the recoverable entries are citric acid (192.12 g/mol) and glycolic acid with pKa = 3.85 ± 0.07, 0.5 M, and Umax = 225-250 V; * from ref. [40]; ** from ref. [44].)
According to traditional theories of dielectric breakdown, the following relationship holds for the Umax and the breakdown voltage (Uc) [17,18]:

Umax < Uc = (E/α) · ln(z/(γη)),

where E is the electric field across the barrier layer, which is often constant at ∼1.0 V/nm; α is the impact ionization coefficient, which is in a reciprocal relationship with the mean free path (λ) of an electron passing a distance of 1 cm (α = 1/λ), and the coefficient α increases with E; jo is the oxidation current that corresponds to the inward migration of O²⁻ and leads to the formation of the oxide; and jc is the incorporation current that comes from acid anions (jc is considered to be a constant fraction of jo, i.e., jc = γjo). During the anodization, electrolyte species (e.g., acid anions) migrate to the AAO barrier layer and, as a consequence of the high E, some of them release primary electrons into the oxide conduction band (electronic current je0 = ηjc = ηγjo, where γ is determined by the concentration of electrolyte species and η denotes the ability of the electrolyte species to release electrons). These electrons are accelerated by the high E, producing the avalanche electronic current (je), which should be a fraction (z) of the oxidation current jo, i.e., je = zjo = zjc/γ, with z ≤ 1/3 [17]. It is believed that the contribution of acid anions to the formation of AAO is negligible because the anions migrate more slowly than oxygen species, and thus they require a stronger electric field to reach the oxide/metal interface in a reasonable time [57,58]. However, glycolate, malate, and citrate anions seem to actively participate in the AAO growth. This was already noticed by Ma et al. [39] on the occasion of anodization in a CC solution. Based on the results, the authors concluded that the amount of free citric acid anions (i.e., H2Cit⁻, HCit²⁻, Cit³⁻) actually plays a crucial role in pore nucleation. Our experiments partially confirmed this conclusion: the close-packed hexagonal structure was obtained only in the higher citric acid concentration (1.5 M), whereas the pores were not formed during anodization in 0.5 M CC. The heavier ionic species require a stronger E to reach the reaction spot, which naturally increases Uc (as well as Umax). Most probably, the relatively large molecular mass of selenic acid was responsible for the relation Umax(selenic) ≥ Umax(oxalic), despite pKa(selenic) < pKa(oxalic) (see Table S1, [55,56]). Owing to the lowest molecular mass of GC, the glycolate anions migrate more easily to the barrier layer than the malate or citrate anions, giving rise to a larger z value, which is manifested in the much more intensive currents observed during anodization in GC as compared to the currents recorded during the processes performed in the other two AHAs (Figures 5c and 6b). Citrate anions, on the other hand, as the heaviest species, require a larger anodizing voltage (300 V) to start the pore formation. Apparently, citrate ions also have a lower ability to release electrons (η) compared to the malate and glycolate species. Thus, pores are formed under larger anodizing voltages and in higher acid concentrations. In glycolic acid, on the contrary, owing to its relatively large η, a three times lower acid concentration (γ) is sufficient to produce regular pore arrays under a much lower (250 V) anodizing voltage. The physical parameters of the AHA electrolytes are, however, not sufficient to explain all similarities and differences of anodization in AHA solutions.
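To illustrate how the breakdown relationship above behaves, the short calculation below evaluates Uc = (E/α) · ln(z/(γη)) for a grid of parameter values. All numbers here are assumptions chosen only to show the trend (a lower electron-releasing ability η or a lower anion fraction γ raises Uc); they are not fitted to the experiments reported in this work.

```python
import math

def breakdown_voltage(E_V_per_nm, alpha_per_nm, z, gamma, eta):
    """Uc = (E / alpha) * ln(z / (gamma * eta)); returns volts.

    E is the barrier-layer field (V/nm), alpha the impact ionization
    coefficient (1/nm), z the avalanche fraction of the oxidation
    current, gamma the anion fraction, eta the electron-release ability.
    """
    return (E_V_per_nm / alpha_per_nm) * math.log(z / (gamma * eta))

# Illustrative (assumed) values: E = 1.0 V/nm, alpha = 0.02 1/nm, z = 1/3.
for gamma, eta in [(0.1, 0.05), (0.1, 0.01), (0.3, 0.01)]:
    uc = breakdown_voltage(1.0, 0.02, 1 / 3, gamma, eta)
    print(f"gamma={gamma}, eta={eta} -> Uc ≈ {uc:.0f} V")
```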
Since the experimental data strongly suggest that the GC, MC, and CC anions take an active part in the AAO growth, their structural features should also be taken into account. It has to be remembered that AHAs can form various complexes with Al ions. Ma et al. [39] have postulated that at the beginning of the process a certain amount of Al³⁺ is consumed by the citrate anions to form Al-citrate complexes, causing the ia to decrease. As the process proceeds, these complexes transform slowly into citric-acid-incorporated alumina, which randomly precipitates on the barrier-type alumina to form protuberances. The high electric field concentrates between those protuberances, giving rise to field-assisted oxide dissolution accompanied by the ia increase, and finally to the pore development [40]. Similar processes may occur during anodization in GC and MC electrolytes. The different molecular structures and chemical properties of the citrate, malate, and glycolate complexes may, in turn, modify to a different extent the field-assisted oxide dissolution process, leading to the peculiar behavior of the current flow during anodization, which is further reflected in the resulting AAO morphology. It was observed, for instance, that the glycolate ligand coordinates via both the carboxylate and the hydroxy group to the Al(III) ion, forming a binuclear Al(III)-glycolate complex with three hydrogen bonds connecting two fac-Al(III)-glycolate complexes [59]. What is more, it was shown that the Al(III) facilitates the ionization of the hydroxy group of the glycolate. Therefore, it would not be surprising if these complexing properties of GC ions contributed to the AAO growth process, giving rise to the observed peculiarities, such as the extremely high currents generated during the anodization in this acid and the non-uniform cell size distribution in the AAO.

Conclusions

The anodization in glycolic acid was performed under electrochemical conditions close to the ones used during the anodization in citric and malic acid solutions, where the self-ordering regimes were operative. Anodization of Al in the three AHA electrolytes was compared. In GC, the pores organize into hexagonal close-packed structures under the following conditions: 0.5 M, 225-250 V, 5 °C. However, they are grouped into a few areas of different cell sizes. In general, the growth of AAO in the three AHAs follows the Janus-type anodization, which is characterized by the same ia(t) stages as in MA, but the magnitude of the generated currents is typical for HA. The peak (ia,max) in the ia(t) curves, which was previously associated with the initiation of pore development, appears after a significantly different anodizing time depending on the AHA used. The process is the fastest in the GC electrolyte. The electric conductivity (σ) of the 0.5 M GC, MC, and CC electrolytes decreases in accordance with the acid strength pKa(CC) < pKa(MC) < pKa(GC): σ(CC) > σ(MC) > σ(GC). However, the anodization voltage under which a self-organized pore formation in AAO was observed (Umax) decreased with increasing pKa: Umax(CC) > Umax(MC) ≥ Umax(GC). Moreover, to initiate the pore formation in CC, a three times larger concentration is required than in the GC or MC electrolytes, and ia,max(CC) < ia,max(MC) < ia,max(GC). This peculiar behavior is most probably linked with the diverse propensity of the acid ions to complex Al. Depending on the AHA, its tendency and ways to coordinate Al ions, the contribution of stable Al complexes to the AAO growth varies.
The molecular structure of the organic ions, as well as the structure of the Al complexes, their molecular mass, and their ability to lose electrons, play a crucial role in the AAO formation in AHA electrolytes and seem to be more important than the pKa values of the AHAs. The anodization in AHA electrolytes seems to be a promising, environmentally friendly technique to produce robust AAO films with desired anti-corrosive properties.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ma14185362/s1, Table S1: Dissociation constants (pKa1 and pKa2) at ≈25 °C and molecular mass of selected acids used to produce anodic alumina (AAO) in a given electrolyte concentration, anodizing voltage, and temperature.
Using single-molecule FRET to probe the nucleotide-dependent conformational landscape of polymerase β-DNA complexes

Eukaryotic DNA polymerase β (Pol β) plays an important role in cellular DNA repair, as it fills short gaps in dsDNA that result from removal of damaged bases. Since defects in DNA repair may lead to cancer and genetic instabilities, Pol β has been extensively studied, especially its mechanisms for substrate binding and a fidelity-related conformational change referred to as "fingers closing." Here, we applied single-molecule FRET to measure distance changes associated with DNA binding and prechemistry fingers movement of human Pol β. First, using a doubly labeled DNA construct, we show that Pol β bends the gapped DNA substrate less than indicated by previously reported crystal structures. Second, using acceptor-labeled Pol β and donor-labeled DNA, we visualized dynamic fingers closing in single Pol β-DNA complexes upon addition of complementary nucleotides and derived rates of conformational changes. We further found that, while incorrect nucleotides are quickly rejected, they nonetheless stabilize the polymerase-DNA complex, suggesting that Pol β, when bound to a lesion, has a strong commitment to nucleotide incorporation and thus repair. In summary, the observation and quantification of fingers movement in human Pol β reported here provide new insights into the delicate mechanisms of prechemistry nucleotide selection.

DNA repair is pivotal for maintaining genome integrity (1). Among the most common damages of DNA are base lesions, in which the chemical structure of a single base has been altered (2,3). These modifications may disturb proper base pairing and can lead to harmful mutations in the genome. In eukaryotes, the base excision repair (BER) pathway is responsible for replacing these damaged bases (4,5). Within BER, the damaged base(s) and the corresponding part of the backbone are removed, creating a gap of one or more bases in the DNA (6). DNA polymerase β (Pol β) then binds to the gap and subsequently fills it by adding cognate nucleotides to the 3′ end of the primer strand (7,8). Pol β is one of the smallest eukaryotic polymerases and belongs to the X-family of DNA polymerases (9).
Pol β consists of a polymerase domain and a lyase domain (10) and was shown to adopt an elongated structure in solution (11,12). Upon binding to gapped DNA, the lyase domain interacts with the 5′ phosphate on the downstream strand, while the polymerase domain adopts a structure that has been compared with a hand (10). Crystal structures have suggested that Pol β bends its DNA substrate at an angle of ≈90° (13). Incoming nucleotides then bind to a subdomain known as the "fingers," forming the ternary complex. A conformational change called "fingers closing" positions the nucleotide closer to the active site to facilitate chemistry. Studies on Escherichia coli DNA polymerase I (KF), which undergoes a very similar conformational change from an open to a closed conformation, suggested that the fingers do not close entirely when a noncomplementary nucleotide is bound. Instead, an intermediate "ajar" conformation was identified, which serves as a fidelity checkpoint (14-16). At the same time, incorrect nucleotides were found to promote dissociation of KF from the DNA (17,18). In any cell, noncomplementary nucleotides and ribonucleotides vastly outnumber correct nucleotides. In cancerous cells, the nucleotide concentrations increase further (19), highlighting that effective mechanisms for discriminating correct from noncomplementary nucleotides are pivotal for faithful DNA repair. However, the existence of a fidelity checkpoint during fingers closing is not widely accepted for Pol β. An early study using small-angle X-ray scattering suggested that mismatched ternary complexes exist in a partially closed state (20). In contrast, several later crystal structures with mismatched nucleotides or their nonhydrolyzable analogues showed that the fingers domain adopts an overall closed conformation, although the active site is distorted (21-23). Fidelity-reducing manganese was necessary to stabilize these mismatched complexes. Studies in the presence of physiological magnesium underscore the difficulty for Pol β to form stable closed complexes with incorrect nucleotides (24,25). The presence and nature of a partially closed fingers conformation as a fidelity checkpoint in Pol β therefore remain unknown. Interestingly, the Pol β mutator variant I260Q has recently been reported to exhibit a collapsed fingers domain in the binary complex (26), suggesting that positioning of the fingers domain is important for Pol β fidelity. To study fingers movement of Pol β in more detail, Towle-Weicksel et al. introduced an assay based on ensemble FRET to monitor fingers closing using stopped-flow experiments (24). This approach used Pol β labeled with a fluorophore on the fingers subdomain (at position V303C) and DNA substrates labeled with a quencher. By fitting the stopped-flow traces to a multistep kinetic model, the authors extracted rates for fingers closing and opening in the presence of the complementary nucleotide. Noncomplementary nucleotides were not found to induce fingers closing, leading the authors to hypothesize that discrimination between correct and incorrect nucleotides already takes place before fingers closing. In later work, the authors showed that a low-fidelity Pol β mutant found in cancer cells exhibits altered fingers dynamics (25).
Here, we developed two single-molecule assays to study the DNA binding behavior and fingers movement of Pol β, for which we used a combination of FRET and total internal reflection fluorescence (TIRF) microscopy in order to monitor hundreds of molecules in parallel and in real time. The first assay uses a doubly labeled gapped DNA substrate to report on binding of unlabeled WT Pol β. We found that strong DNA bending upon binding of Pol β, as suggested by several crystal structures, is not the dominant binding mode. A second assay, inspired by various single-molecule studies on E. coli DNA polymerase I (KF) (16,18,27), employs a similar design as the stopped-flow experiments discussed above; the fingers subdomain of Pol β is labeled with an acceptor fluorophore, whereas a gapped, nonextendable DNA substrate bears the donor fluorophore. The labeling position on the DNA was chosen such that open and closed conformations of the fingers exhibit different FRET efficiencies (E) when Pol β is bound to the surface-immobilized DNA substrate. This approach allowed us to visualize fingers movement of individual Pol β molecules repeatedly in response to either complementary or noncomplementary nucleotides that were added to the buffer. We found that correct nucleotides induce fingers closing; incorrect nucleotides, on the other hand, are quickly rejected by the fingers domain. Contrary to the destabilization of polymerase-DNA complexes that was observed for KF, Pol β binds more tightly to the DNA in the presence of incorrect nucleotides. This suggests that in BER quick repair may be more important than an additional fidelity mechanism.

A doubly labeled gapped DNA sensor indicates binding of Pol β

First, we assessed binding of WT Pol β to dsDNA with a 1-nucleotide gap, mimicking the BER pathway intermediate that is the natural and preferred substrate of Pol β (Fig. 1, A-C) (10). Crystal structures 1BPX and 1BPY suggested that the DNA adopts a sharply bent conformation (~90°) after binding of Pol β. We set out to make bending a direct indicator for polymerase binding. To this end, we labeled our DNA substrate with a donor dye on the primer and an acceptor dye on the template, at positions that are outside the putative binding region of the polymerase (as judged from crystal structures 1BPX and 1BPY), thus creating a "bending sensor." In the absence of DNA polymerases, we found an apparent FRET efficiency E* of 0.37 (Fig. 2A), corresponding to an inter-fluorophore distance of 8.1 ± 0.1 nm (mean ± S.E.) after corrections (see Table S2 and Refs. 28 and 29 for a detailed discussion of the correction procedure). A standard B-DNA model of a nongapped construct predicted an inter-dye distance ⟨R_DA⟩_E of 8.3 nm. This value represents the maximum possible distance between the fluorophores (i.e. in the absence of any bending); the slightly lower inter-dye distance for the unbound bending sensor is consistent with the fact that adding the gap in the DNA structure introduces more flexibility and the possibility of visiting bent conformations even in the absence of protein binding. We then tested our sensor with E. coli DNA polymerase I (KF), which was shown to bend gapped DNA (30). Indeed, with increasing concentrations of this polymerase, a second peak at high FRET efficiency emerged (Fig. S1). For Pol β, however, we observed not a second FRET peak but rather an unexpected peak shift. Because of our use of alternating-laser excitation (ALEX) (28,31), in which direct excitation of the donor is alternated with direct excitation of the acceptor fluorophore to report on the photophysical state of the acceptor, we were able to rule out the possibility that this peak shift was due to protein-induced fluorescence enhancement (32,33). This means that the change in FRET efficiency must be solely due to a distance change and not to interactions between the protein and the dyes (see Fig. S2 for full E*/S histograms and a single-molecule time trace; also see Table S2).
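The distance estimates quoted here follow from the standard Förster relation E = 1/(1 + (R/R₀)⁶). The Python sketch below is a minimal illustration, not the authors' correction pipeline (which also involves the procedure in Table S2): it inverts this relation and checks it against the R₀ of 6.9 nm and the modelled open/closed distances quoted later in the Experimental Procedures. Function names are ours.

```python
def fret_from_distance(R, R0):
    """Foerster relation: FRET efficiency at inter-dye distance R."""
    return 1.0 / (1.0 + (R / R0) ** 6)

def distance_from_fret(E, R0):
    """Invert E = 1/(1 + (R/R0)**6) to get R (same units as R0)."""
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

# With R0 = 6.9 nm, the modelled open (6.4 nm) and closed (5.5 nm)
# distances reproduce, to within ~0.01, the efficiencies of 0.60
# and 0.79 quoted in the text.
R0 = 6.9  # nm
for R in (6.4, 5.5):
    print(f"R = {R} nm -> E = {fret_from_distance(R, R0):.2f}")
print(f"E = 0.79 -> R = {distance_from_fret(0.79, R0):.1f} nm")
```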
We confirmed the formation of Pol β-DNA complexes by EMSA (Fig. S3), which suggested that DNA binding takes place at Pol β concentrations as low as 1 nM, thereby confirming previously reported values in the low nanomolar range (25,34). We then fitted the peak shift in our FRET efficiency histograms using a binding isotherm, introducing an apparent binding constant, K_d,app, which indicates the Pol β concentration at which the magnitude of the bend appears to be 50% (Fig. 2B). We obtained a K_d,app value of 17 ± 4 nM (Fig. 2B) and a FRET efficiency that levels off at E* = 0.46, corresponding to an inter-fluorophore distance of 7.5 nm after corrections (Fig. 2C and Table S2). This distance suggests that the bend measured here is less sharp than the bend seen in the crystal structure.

[Figure 2, B and C, caption excerpt: see Table S2 for the correction procedure. Each data point represents the mean of three independent experiments; error bars indicate the S.E. (C) Modelled and experimentally determined inter-fluorophore distances of both the native bending sensor and the bent conformation are shown; each experimental value represents the mean of triplicate measurements.]
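The binding-isotherm fit that yields K_d,app can be sketched as follows. This is an illustration only: the data points are invented (the real titration data are in Fig. 2B), and the hyperbolic model simply interpolates E* between its free and saturated values as a function of the Pol β concentration.

```python
import numpy as np
from scipy.optimize import curve_fit

def isotherm(c, E_free, E_sat, Kd):
    """Apparent FRET efficiency vs. polymerase concentration c:
    a simple 1:1 binding hyperbola between E_free and E_sat."""
    return E_free + (E_sat - E_free) * c / (Kd + c)

# Synthetic example data (nM vs. E*), invented for illustration;
# the concentrations mirror those used for WT Pol beta in the text.
conc = np.array([3, 10, 30, 60, 100, 200, 300], dtype=float)
Estar = np.array([0.385, 0.40, 0.43, 0.445, 0.45, 0.455, 0.458])

popt, pcov = curve_fit(isotherm, conc, Estar, p0=(0.37, 0.46, 20.0))
E_free, E_sat, Kd_app = popt
perr = np.sqrt(np.diag(pcov))
print(f"E_free = {E_free:.3f}, E_sat = {E_sat:.3f}, "
      f"Kd,app = {Kd_app:.1f} +/- {perr[2]:.1f} nM")
```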
Singly labeled Pol β reveals fingers closing in the presence of the correct nucleotide

Next, we studied the ability of fluorescently labeled Pol β to report on the conformation of the fingers subdomain (Fig. 1, D and E). Reasoning that the different labeling positions may have an influence on the K_d, we first measured binding of the labeled polymerase to the singly labeled DNA construct by recording time traces at increasing concentrations of Pol β. We identified binding events and constructed dwell time histograms to obtain k_off and k_on (Fig. S4). Although nonspecific adsorption made measuring at concentrations higher than 50 nM impossible, we inferred that the K_d must be around 100 nM. This value represents an upper limit, since it does not take into account the labeling efficiency (60-70%; see "Experimental Procedures") and acceptor photobleaching. Additionally, we found that ~6% of all binding events involved two acceptors, with this number being independent of the protein concentration in the range that we measured. We assume that these are doubly labeled proteins, since cysteine 178 (although buried) is still in place in this version of the polymerase and could have been labeled with very low efficiency. This is further supported by the observation that the two acceptors almost always appeared together in the single-molecule time traces. To measure fingers closing, we decided to use a Pol β concentration of 10 nM. Although far lower than the K_d, we preferred this concentration since it resulted in very low nonspecific adsorption while giving a reasonable number of binding events. We performed a titration of Pol β with increasing concentrations of the complementary nucleotide dTTP opposite template A (i.e., A-dTTP). Time traces of single, donor-labeled DNA showed binding events of single, acceptor-labeled Pol β as an increase in the acceptor emission after acceptor excitation (AA) and the appearance of FRET (Fig. 3A). Hidden Markov modelling (HMM) was used to identify the open (low E*) and closed (high E*) conformations within time traces of individual binding events (Fig. 3B). At a dTTP concentration of 1 µM, traces predominantly show low FRET efficiency, with only brief excursions to the high FRET efficiency that is associated with closed fingers. At higher dTTP concentrations, longer residence times in the closed state are observed. We constructed FRET efficiency histograms and indicated the open and closed populations, as determined by HMM (Fig. 3C). In the absence of dTTPs, the fingers mostly adopt the open conformation (92%). With increasing dTTP concentration, the closed conformation is increasingly populated. At a dTTP concentration of 50 µM, the fingers are mostly closed (95%). Using accurate FRET, we determined the distances associated with open and closed fingers (Table S3). We found inter-fluorophore distances of 6.5 ± 0.047 nm for the open and 5.7 ± 0.1 nm for the closed conformation. These distances are in excellent agreement with the distances of 6.4 nm and 5.5 nm predicted from structural modelling with the FPS software (see "Experimental Procedures"). We note that the less sharp bend observed in our DNA substrate does not appear to alter the distance between the fingers domain and the primer. Dwell time histograms of the open and closed conformations were constructed and fitted with exponential decay curves (Fig. S5). The rate of fingers closing is extracted from the dwell times in the open conformation, while the rate of fingers opening is extracted from the dwell times in the closed conformation. Plotting k_obs,close and k_open against the concentration of dTTP showed that the closing rate is concentration dependent whereas the opening rate remains largely constant, with a slight decrease that we attribute to missed events at higher concentrations of dTTP. A model that links fingers closing to the affinity of complementary nucleotides without accounting for fingers closing in the binary complex was previously described and applied to stopped-flow data (Fig. 3D) (24,25). In our data, the rare fingers closing in the binary complex (measured at a dTTP concentration of 0 µM) did not allow us to construct a suitable dwell time histogram even though we analyzed more than 1000 binding events. We adapted the model to exclude any steps after fingers closing, as the use of dideoxy-terminated primer DNA prevents the incorporation of nucleotides. Thus, the concentration of dTTPs relates to k_obs,close as follows:

k_obs,close = k_2 · K_1[dTTP] / (1 + K_1[dTTP]),

in which K_1 is the association constant for dTTPs and k_2 is the closing rate with dTTP bound to the fingers. We fitted our data with no constraints for K_1 and k_2 (Fig. 3E) and found a k_2 of 80 ± 98 s⁻¹ (mean ± S.E.) and a K_1 of 0.019 ± 0.027 µM⁻¹ (corresponding to a K_d(dTTP) of 53 µM). The large errors are due to experimental constraints such as the limited acquisition rate of the camera (40 s⁻¹). At dTTP concentrations higher than 10 µM, the lifetime of the open conformation is often too short to be clearly resolved; similarly, lifetimes of the closed state shorter than 50 ms are difficult to resolve.
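As a companion to the fit in Fig. 3E, the sketch below fits the same hyperbolic closing-rate model to invented example data; the synthetic points were generated to be consistent with the reported parameters, so the fit recovers k₂ and K₁ of the right magnitude. It demonstrates the fitting step only and is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def k_obs_close(dntp, K1, k2):
    """Observed fingers-closing rate for the rapid-equilibrium model:
    k_obs = k2 * K1*[dNTP] / (1 + K1*[dNTP])."""
    return k2 * K1 * dntp / (1.0 + K1 * dntp)

# Invented example data: dTTP (uM) vs. observed closing rate (1/s),
# consistent with k2 ~ 80 1/s and K1 ~ 0.019 1/uM
dttp = np.array([0.1, 0.5, 1, 2, 5, 10, 50], dtype=float)
kobs = np.array([0.15, 0.75, 1.5, 2.9, 6.8, 12.5, 39.0])

(K1, k2), pcov = curve_fit(k_obs_close, dttp, kobs, p0=(0.02, 80.0))
print(f"K1 = {K1:.3f} 1/uM (Kd = {1/K1:.0f} uM), k2 = {k2:.0f} 1/s")
```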
We noted that the duration of Pol β-DNA binding events increased with increasing dTTP concentrations by observing a decrease in k_off, while k_on was not affected (Fig. 3F). This finding indicates that complementary nucleotides stabilize the polymerase-DNA complex.

The fingers domain quickly rejects incorrect dGTPs and rUTPs

Previous work with DNA polymerase I (KF) has shown that increasing concentrations of an incorrect nucleotide shift the position of the fingers open peak toward a slightly higher FRET efficiency, likely caused by the polymerase quickly screening and rejecting incorrect nucleotides (15,18). This shift in FRET efficiency has been linked to the existence of a partially closed conformation of the fingers for DNA polymerase I. To investigate the potential existence of a similar conformation in Pol β, we studied the positioning of the fingers in the presence of increasing concentrations of incorrect dGTPs and rUTPs. Binding events were identified in the single-molecule time traces and used to construct FRET efficiency histograms (Fig. 4, A and B). Indeed, the previously identified high FRET state as seen for complementary dTTPs is not present; instead, a shift in E* from ~0.56 to ~0.61 is observed, reminiscent of what has been shown for DNA polymerase I (KF). We note that neither the FRET efficiency distributions nor the single-molecule time traces show two separate states. We attribute this to temporal averaging; the conformational changes occur faster than the acquisition rate of our camera (40 s⁻¹), preventing us from directly detecting transitions from the open to a closed or partially closed state, even when using the HMM of ebFRET.

[Figure 4 caption, excerpt: ... (Table S4). Error bars indicate the S.E. (C) Rates k_on and k_off at increasing concentrations of dGTPs; k_off decreases with increasing concentrations of dNTPs, while k_on remains constant. Each data point represents the mean of three independent experiments (see Fig. S6 for dwell time histograms); error bars indicate the S.E.]

It should be noted that these transitions were also not directly detectable for doubly labeled DNA polymerase I (KF) in the work of Evans et al., for which a higher time resolution was used (100 s⁻¹) (18). Next, we asked whether incorrect nucleotides have an influence on the stability of the polymerase-DNA complex. Markiewicz et al. and Evans et al. showed that, for DNA polymerase I (KF), noncomplementary dGTPs increase k_off (17,18). We used all polymerase binding events in our time traces to construct dwell time histograms (Fig. S5). Fitting with an exponential decay function yielded values for k_on and k_off for every nucleotide concentration of the titration series (Fig. 4C). Interestingly, k_off decreases slowly with increasing dGTP concentrations, mimicking the trend seen for correct dTTPs. This indicates that, for Pol β bound to gapped DNA, even incorrect nucleotides stabilize the polymerase-DNA complex, albeit at higher concentrations than the correct dNTP.
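The dwell-time analysis behind k_on and k_off reduces to estimating the rate of an exponential distribution. The paper fits exponential decays to dwell-time histograms; the sketch below instead uses the equivalent maximum-likelihood shortcut for a single exponential (the rate is the inverse mean dwell) on simulated dwell times, which is a deliberate simplification of the published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: dwell times (s) of the polymerase-bound state
true_koff = 0.5  # 1/s, assumed only for this simulation
dwells = rng.exponential(scale=1.0 / true_koff, size=500)

# For a single-exponential distribution, the maximum-likelihood
# estimate of the rate is the inverse of the mean dwell time.
k_off = 1.0 / dwells.mean()
k_off_err = k_off / np.sqrt(len(dwells))  # approximate standard error
print(f"k_off = {k_off:.3f} +/- {k_off_err:.3f} 1/s")
```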
Discussion

The use of single-molecule FRET allowed us to observe and analyze conformational changes of individual Pol β-DNA complexes in real time, thereby overcoming some of the ensemble averaging inherent to conventional fluorescence-based techniques, such as stopped-flow experiments. Our experiments with WT Pol β showed substrate binding with an apparent K_d of 17 ± 4 nM. Gel mobility shift assays, as performed by us and others, also resulted in values in the low nanomolar range (25), while a titration based on single-turnover analysis at different DNA concentrations revealed a K_d of 22 nM (34). Our distance measurements on the bending sensor, however, revealed that DNA bending upon polymerase binding occurs to a smaller extent than predicted by various structures resolved with X-ray crystallography. The inter-fluorophore distances that we calculated for the fingers conformational change (between residue V303C and the primer) are consistent with crystal structures 1BPX and 1BPY, implying that it is the flexible positioning of the downstream strand that determines the bend. We further studied the conformational change associated with fingers closing using fluorescently labeled Pol β. We observed an increase in the rate of fingers closing with increasing concentrations of the complementary dNTP, as expected for an induced-fit mechanism. Previous studies in Pol β (24,25) found a rapid fingers closing rate of 98 s⁻¹, close to the maximum rate of 80 s⁻¹ that we determined from our fit. Additionally, we showed that polymerase-DNA complexes become more stable with increasing dTTP concentrations, due to a decrease in k_off. The K_d of the incoming correct nucleotide (1/K_1 = 53 µM for dTTP) is higher than that determined before using chemical quench analysis (K_d = 2.5 µM) (24,25). We note that the K_d as defined in our model represents the dNTP concentration at which the fingers closing rate is at half-maximum, whereas the K_d as defined using chemical quench analysis includes a fast chemistry step that likely prevents nucleotide binding from reaching equilibrium, as illustrated by Kellinger and Johnson (35). We found a small increase in the FRET efficiency of the fingers when supplying noncomplementary nucleotides. We were initially tempted to attribute this increase to the formation of a partially closed fingers conformation. However, we are cautious of doing so, since the existence of such a conformation has been challenging to show using single-molecule techniques. An early study on DNA polymerase I (KF), using a single label on the protein and a single label on the DNA, showed ternary complexes in a state with a FRET efficiency between open and closed fingers in single-molecule time traces (16). The time resolution of these traces was 100 ms, suggesting that the ajar conformation is stable on this time scale and that rejection of incorrect nucleotides is therefore relatively slow. Another study on the same polymerase used a design with both fluorophores on the enzyme (18). That study revealed an increase in FRET efficiency very similar to what we observe, being visible on the population level but not in individual traces (which were acquired at 10-ms resolution). The authors attributed this increase in FRET efficiency to an ajar conformation of the fingers domain, reasoning that a rapid rejection mechanism for incorrect nucleotides prevented them from observing the conformation directly in their single-molecule time traces. Fast rejection is indeed necessary to reach the expected DNA polymerase I synthesis rate of ~15 nucleotides per second (36). Even though we cannot directly detect an ajar conformation in our Pol β time traces, we can draw important conclusions about the underlying fidelity mechanism of the enzyme. Like DNA polymerase I (KF), Pol β rejects incorrect nucleotides faster than the acquisition rate of the camera (40 s⁻¹). This is much faster than fingers opening in the presence of correct dTTPs, which we were able to measure directly at ~4 s⁻¹.
Thus, even though increasing concentrations of incorrect nucleotides will drive the equilibrium toward a closed (or partially closed) state of the fingers, these excursions are always short. With our limited time resolution, temporal averaging of these events leaves individual excursions irresolvable but results in a slight overall increase in FRET efficiency. Despite the fast rejection rate, we showed that noncomplementary nucleotides stabilize the Pol β-DNA complex, similar to the effect seen for correct dTTPs. Apparently, the presence of both correct and incorrect nucleotides can enhance binding to a gapped DNA substrate up to at least 2-fold by decreasing k_off. Interestingly, this is fundamentally different from the destabilizing effect observed for incorrect nucleotides with DNA polymerase I (KF) (17,18). We note that, for Pol β, stabilization is in accordance with a linear reaction pathway, as illustrated in Fig. 3D, and an increase in dNTPs, correct or incorrect, will shift the equilibrium to the right (and make incorporation more favorable). For DNA polymerase I (KF), the pathway is apparently not linear; incorrect nucleotides not only lead to fast rejection but also can lead to disassembly of the entire ternary complex (17,18). Although we do not know at exactly what stage in the reaction path this disassembly happens, it is a mechanism that Pol β does not seem to possess. Given its role to quickly repair damaged DNA in the BER pathway, however, stable gap binding as well as incorporation of an incorrect nucleotide may be more beneficial for Pol β than an additional fidelity mechanism. It will be interesting to see in follow-up single-molecule studies how the balance between fingers closing and reopening is affected in mutator variants of Pol β, similar to what has been done for DNA polymerase I (KF) (15). In conclusion, the direct observation of fingers movement allowed us to obtain conformational rates that describe the dynamic equilibria of individual ternary complexes under prechemistry conditions, shining light on the mechanism of nucleotide selection by Pol β. Future experiments, ideally complemented by using doubly labeled Pol β, will further help elucidate the dynamic-structural relations between DNA, (mutator) Pol β, and other enzymes of the BER.

Polymerase purification and labeling

Here we use the term WT Pol β to refer to Pol β bearing the substitutions C239S, C267S, and V303C, introduced to have a single cysteine residue on the fingers subdomain that can react with the fluorophore bearing a maleimide moiety (24). For the assays in which the fingers conformational change was studied, the V303C was labeled with Alexa Fluor 647 following procedures described before (24). The labeling efficiency was 60-70%, as determined by absorbance measurements (data not shown). For experiments with E. coli DNA polymerase I (KF), we used the D424A mutant, which abolishes the 3′ to 5′ exonuclease activity.

DNA substrate design

As a first step to construct a gapped DNA construct labeled at adequate positions with fluorophores, we examined crystal structures 1BPX and 1BPY (13), which represent Pol β bound to gapped DNA with open and closed fingers, respectively. We extended the DNA from the crystal structures on both sides of the polymerase with a B-DNA helix, using the 3D-DART server (37).
Next, we used FPS (short for FRET-restrained positioning and screening) software to model the accessible volumes of the fluorophores at potential labeling positions on the DNA and to determine inter-dye distances ⟨R_DA⟩_E (38). Modelling parameters include the dimensions of the fluorophore and the dimensions of the linker (Table S1). We selected two labeling positions (positions −15 and +12; see Fig. 1, A-C) that are located outside the binding region of Pol β. Using Cy3B as a donor fluorophore at the −15 position and Cy5 as an acceptor at position +12, these positions are within the distance range for FRET (R_0,Cy3B→Cy5 = 6.9 nm, ⟨R_DA⟩_E,model = 6.0 nm). Additionally, the Cy3B at the −15 position is close enough to the fingers subdomain to exhibit FRET with the Alexa Fluor 647 (R_0,Cy3B→Alexa Fluor 647 = 6.9 nm, ⟨R_DA⟩_E,fingers open = 6.4 nm, ⟨R_DA⟩_E,fingers closed = 5.5 nm) (Fig. 1, D and E). Importantly, these distances translate to a large difference in FRET efficiency (E) between the open (E = 0.60) and closed (E = 0.79) conformations of the fingers. We annealed the 1-nucleotide gapped DNA construct (template A) from a 30-mer dideoxy-terminated primer sequence (biotin-5′-CCT CAT TCT TCG TCC CAT TAC CAT ACA TCC(H)-3′), a 55-mer template sequence (5′-CCA CGA AGC AGG CTC TAC TCT CTA AGG ATG TAT GGT AAT GGG ACG AAG AAT GAG G-3′), and a 24-mer downstream complementary strand (5′-phosphate-TAG AGA GTA GAG CCT GCT TCG TGG-3′), which we ordered from IBA Life Sciences (Germany) and Eurogentec (Belgium). All oligonucleotides were HPLC or gel purified prior to use. The dideoxy-terminated primer prevents incorporation of the nucleotide, allowing us to study only the prechemistry steps. The primer sequence was internally labeled with the donor dye Cy3B through a C6 linker at the previously determined −15 cytosine base; for experiments with unlabeled WT Pol β, the template was also internally labeled with Cy5 through a C6 linker at the +12 thymine base.

EMSAs

³²P-labeled gapped DNA (0.1 nM, the same sequence as the bending sensor) was mixed with a range of WT Pol β concentrations (0 or 0.06-500 nM) in a buffer containing 10 mM Tris-HCl, pH 7.6, 6 mM MgCl2, 100 mM NaCl, 10% glycerol, and 0.1% IGEPAL (a nonionic, nondenaturing detergent). Samples were loaded on 6% native polyacrylamide gels, which were run at 150 V for 3 h. The shift was imaged on a phosphor screen.

TIRF experiments

Labeled DNA molecules were immobilized on PEGylated glass coverslips using a protocol described before (18). We used flow channels formed by Ibidi sticky-Slides VI 0.4. Molecules were imaged on a home-built TIRF microscope, described in more detail elsewhere (39). All experiments were performed using ALEX, in which the direct excitation of the donor alternates with the direct excitation of the acceptor fluorophore (28,31). Experiments on WT Pol β and doubly labeled DNA were performed with laser powers of 1.5 mW (λ = 561 nm) and 1.5 mW (λ = 638 nm). The excitation time and camera frame time were set to 50 ms. Raw FRET efficiency (E*) was calculated using E* = DA/(DD + DA), in which DD is the donor emission intensity after donor excitation and DA is the acceptor emission intensity after donor excitation (FRET). The acceptor emission intensity after acceptor excitation, AA, as obtained during ALEX, was used for time trace selection. Experiments with fluorescently labeled Pol β were performed with laser powers of 1.5 mW (λ = 561 nm) and 0.75 mW (λ = 638 nm). The excitation time and frame time were 25 ms.
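The per-frame observables used above are easy to compute from the three ALEX photon streams. In the sketch below, the stoichiometry S is taken as the common ALEX ratio (DD + DA)/(DD + DA + AA); this definition is not spelled out in the text, so treat it as an assumption, and the example intensities are invented.

```python
import numpy as np

def alex_observables(DD, DA, AA):
    """Per-frame raw FRET efficiency E* and (assumed standard ALEX)
    stoichiometry S from the three photon streams:
    DD: donor emission after donor excitation
    DA: acceptor emission after donor excitation (FRET)
    AA: acceptor emission after acceptor excitation."""
    DD, DA, AA = (np.asarray(x, dtype=float) for x in (DD, DA, AA))
    E_star = DA / (DD + DA)
    S = (DD + DA) / (DD + DA + AA)
    return E_star, S

# Invented example intensities (photons per frame)
DD = [300, 280, 120, 110]
DA = [150, 160, 330, 340]
AA = [400, 410, 395, 405]
E_star, S = alex_observables(DD, DA, AA)
print(np.round(E_star, 2), np.round(S, 2))
```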
Surface-immobilized DNA molecules were imaged in a buffer containing either WT Pol β (3, 10, 30, 60, 100, 200, and 300 nM) or labeled Pol β-V303C-Alexa Fluor 647 (10 nM in experiments with dTTPs, 20 nM in experiments with dGTPs and rUTPs). The imaging buffer further contained 50 mM Tris, pH 7.5, 10 mM MgCl2, 100 mM NaCl, 100 µg/ml BSA, 5% glycerol, 1 mM DTT, 1 mM Trolox, 1% gloxy, and 1% glucose. Experiments conducted without NaCl are marked in the text. Trolox is a triplet-state quencher (40); gloxy and glucose form an enzymatic oxygen scavenger system to prevent premature fluorophore bleaching (41). When specified, complementary dTTPs were added to achieve final concentrations of 0.1, 0.5, 1, 2, 5, 10, and 50 µM; concentrations of incorrect dGTPs and rUTPs were 10, 30, 100, 300, 1000, and 3000 µM.

Time trace selection and HMM

Time traces from individual molecules were collected to measure polymerase binding times and to extract dwell times of open and closed fingers conformations. Because of variations in the signal-to-noise ratio of molecules, as well as the presence of bleaching and blinking, an initial selection of molecules was made by hand; only DNA molecules that showed a constant DD + DA signal with sudden transitions (within 1 frame) from the free state to the bound state were selected. To determine the binding times, we first applied a 5-frame moving median filter to all selected traces, before applying additional selection criteria: 1) the sum of DD and AA is higher than 50 photons, and 2) the FRET efficiency is higher than 0.4. Additionally, settings were such that the disappearance of the donor signal (bleaching) was interpreted as the end of the trace and the disappearance of the acceptor signal (bleaching or polymerase dissociation) was interpreted as the end of a binding event. The design of the experiment does not allow us to distinguish between acceptor bleaching and polymerase dissociation. Filtering traces following these criteria sometimes resulted in longer binding events being cut into multiple shorter events due to remaining noise. To prevent these cases, an exception was added to allow single-point excursions to lower intensities or FRET efficiencies. The final algorithm was found to identify most binding events that are also detectable by visual inspection. Extremely short events, however, were often not detected because of the median filter. These events may therefore be under-represented in our dwell time histograms. In the end, this approach of data selection resulted in 30-50% of all identified single-molecule traces being used for later analysis. We consider this a good yield, since the remainder contains not only traces with misidentified events but also traces without any binding events at all and traces that are too noisy. For extraction of fingers conformational changes, binding events from experiments with dTTPs were loaded into ebFRET, a software package for HMM (42). Because the final data point of some binding events may have a donor or acceptor that is already decreasing in intensity (just before the cutoff value that we set for dissociation or bleaching), we removed these points from the traces by applying a padding of 10 time points. Next, the prior for the minimal center position (open fingers) was set to E* = 0.4 and that for the maximal center position (closed fingers) to E* = 0.8. The convergence threshold was set to 10⁻⁵.
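A minimal re-implementation of the event-detection criteria described above might look as follows. The thresholds match the numbers quoted in the text, but the single-point-exception logic and the data layout are simplifications, not the authors' actual algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_binding_events(DD, DA, AA, min_photons=50, min_E=0.4, win=5):
    """Flag frames as 'bound' using the criteria from the text:
    5-frame moving median filter, DD + AA > 50 photons, E* > 0.4.
    Single-frame dropouts inside an event are bridged (a simplified
    version of the 'single-point excursion' exception). Returns
    (start, stop) frame indices of contiguous bound stretches."""
    DD = median_filter(np.asarray(DD, float), size=win)
    DA = median_filter(np.asarray(DA, float), size=win)
    AA = median_filter(np.asarray(AA, float), size=win)
    E_star = DA / (DD + DA)
    bound = ((DD + AA) > min_photons) & (E_star > min_E)
    # Bridge single-frame gaps inside an event
    for i in range(1, len(bound) - 1):
        if not bound[i] and bound[i - 1] and bound[i + 1]:
            bound[i] = True
    # Extract contiguous bound stretches
    events, start = [], None
    for i, b in enumerate(bound):
        if b and start is None:
            start = i
        elif not b and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(bound)))
    return events

# Tiny synthetic demo: 20 free frames, 30 bound frames, 20 free frames
DD = [100] * 20 + [60] * 30 + [100] * 20
DA = [5] * 20 + [120] * 30 + [5] * 20
AA = [2] * 20 + [300] * 30 + [2] * 20
print(find_binding_events(DD, DA, AA))  # roughly [(20, 50)]
```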
Investigating the Role of TGF-β Signaling Pathways in Human Corneal Endothelial Cell Primary Culture

Corneal endothelial diseases are the leading cause of corneal transplantation. The global shortage of donor corneas has resulted in the investigation of alternative methods, such as cell therapy and tissue-engineered endothelial keratoplasty (TEEK), using primary cultures of human corneal endothelial cells (hCECs). The main challenge is optimizing the hCEC culture process to increase the endothelial cell density (ECD) and overall yield while preventing endothelial-mesenchymal transition (EndMT). Fetal bovine serum (FBS) is necessary for hCEC expansion but contains TGF-βs, which have been shown to be detrimental to hCECs. Therefore, we investigated various TGF-β signaling pathways using inhibitors to improve hCEC culture. Initially, we confirmed that TGF-β1, 2, and 3 induced EndMT on confluent hCECs without FBS. Using this TGF-β-induced EndMT model, we validated NCAM as a reliable biomarker to assess EndMT. We then demonstrated that, in a culture medium containing 8% FBS for hCEC expansion, TGF-β1 and 3, but not 2, significantly reduced the ECD and caused EndMT. TGF-β receptor inhibition had an anti-EndMT effect. Inhibition of the ROCK pathway and, notably, of the P38 MAPK pathway increased the ECD, while inhibition of the ERK pathway decreased the ECD. In conclusion, the presence of TGF-β1 and 3 in 8% FBS leads to a reduction in ECD and induces EndMT. The use of SB431542 or LY2109761 may prevent EndMT, while Y27632 or Ripasudil, and SB203580 or SB202190, can increase the ECD.

Introduction

The human corneal endothelium consists of a single layer of 300,000 to 500,000 (depending on age and individual factors) corneal endothelial cells (CECs) covering the inner surface of the cornea. These cells maintain corneal transparency through their ion pump functions. In vivo, most human CECs (hCECs) are arrested at the G1 phase of the cell cycle [1,2]. In response to various diseases or damage, hCECs rely on cell migration and enlargement for repair [2,3]. Since the regeneration capacity of hCECs is limited in vivo, ex vivo, and in vitro, the quality of endothelial tissue is primarily evaluated using the endothelial cell density (ECD), measured in cells/mm², in patients as well as in stored corneas and cultured cells. Corneal endothelial diseases are common and are dominated by three main causes: a primary disease of the endothelium called Fuchs endothelial corneal dystrophy (FECD), an iatrogenic disease called pseudophakic bullous dystrophy (PKB), and the limited survival of some corneal grafts. FECD affects between 4% and 11% of adults over the age of 40 in the western world, to varying degrees [4]. PKB is a rare complication (between 0.1% and ...).

Primary Culture of hCECs

The hCECs were obtained by primary culture, and the culture method is described in our previous study, which was the result of a collaboration with Professor Koizumi and the Okumura team [29]. Briefly, Descemet's membrane, including the corneal endothelium, was mechanically peeled off from donor corneas, followed by incubation in 500 µL of Descemet's membrane digestion medium at 37 °C for 16 h. After digestion, the released CECs were washed in OptiMEM-I using centrifugation at 200× g for 5 min. The cells were then seeded in a 24-well plate (190 mm²/well) coated with iMatrix-511 (892012, Nippi) and cultured in complete growth medium containing 8% FBS.
The culture plate was kept in an incubator with a humidified 5% CO2 atmosphere at 37 °C, and the medium was replaced with fresh culture medium once per week until confluence was reached (passage (P)0). Subsequent passaging of the cells was performed at a ratio of 1:2, and the time from cell seeding to subculturing was approximately one month. We used seven hCEC cultures from seven different donor corneas that were preserved using the organ culture storage method. The donors' ages ranged from 58 to 82 years, with an average of 72 ± 10 years (four females and three males). The initial ECD was 2624 ± 487 (1920-3248) cells/mm² in the donor corneas. These seven cultures were selected based on their typical hCEC morphologies, with an ECD higher than 1200 cells/mm².

Immunofluorescence (IF)

The hCECs were seeded in 384-well plates pre-coated with iMatrix-511 at a density of 500 cells/mm² and cultured in BGM alone (control) or BGM containing TGF-β1, 2, or 3, or one of the TGF-β cell signaling pathway inhibitors, for four weeks. The IF protocol was previously described by our team [29,42]. Briefly, cells were fixed in pure methanol at room temperature for 15 min after rinsing with PBS containing Ca²⁺ and Mg²⁺. The cells were then rehydrated in PBS and incubated in blocking buffer (PBS, 2% bovine serum albumin, 2% goat serum) for 30 min at 37 °C. The primary antibodies, diluted to 1/300 in blocking buffer, were incubated with the cells at 37 °C for one hour under gentle agitation (30 rpm). After three rinses in PBS, the secondary antibodies, diluted to 1/600, and DAPI, diluted to 2 µg/mL in blocking buffer, were incubated with the cells at 37 °C for one hour under gentle agitation. After three rinses in PBS, the cells were immersed in Fluoromount-G™ mounting medium (00-4958-02, Invitrogen) to protect the fluorochromes (Alexa Fluor™ 488 and DAPI). An epifluorescence inverted microscope (IX81, Olympus, Tokyo, Japan) with the CellSens software (Soft Imaging System GmbH, Olympus) was used to acquire images.

Methods of Quantification

EndMT characterization by cell shape analysis

This analysis was performed based on phase-contrast images of hCECs obtained using a phase-contrast microscope (CKX41, Olympus) with a 10× objective. Due to the suboptimal quality of images from the 384-well culture plates (border effect), we used 24-well plates for cell culture and phase-contrast imaging. This analysis was performed only for the TGF-β-induced EndMT on confluent hCECs incubated with SFM (Result 1). Phase-contrast images of the cells were analyzed based on cell shape. Since typical hCECs possess a hexagonal or polygonal shape in vitro, while mesenchymal or fibroblast cells are elongated, we used the aspect ratio (AR) as the criterion to characterize EndMT. The AR was the ratio of the major axis to the minor axis of each cell. When the AR was close to 1, the cells were typical hCECs, whereas a higher AR indicated EndMT. The protocol for the AR measurement is detailed in Supplementary Data S1.

ECD measurement

To determine the ECD, we counted the number of cell nuclei stained with DAPI per square millimeter. All ECD counts were performed in 384-well plates pre-coated with iMatrix-511. To evaluate the effects of different molecules (TGF-β1, 2, or 3, the neutralizing antibody anti-TGF-βs, and 14 TGF-β signaling pathway inhibitors) on cell expansion, hCECs were seeded and cultured in BGM (control) or BGM containing one of the molecules to be evaluated (Experiment/Result 3 and 4).
The cultures were maintained for 4 weeks. An epifluorescence microscope equipped with a X10 objective was used to acquire one image per well, and the number of nuclei was counted on the entire surface of each image (1.6 mm²) using a home-made plugin installed in ImageJ. This microscopic field, representing 16% of the well surface, allowed the reliable measurement of ECD.

EndMT analysis by fluorescence intensity of biomarkers

Assessment of EndMT is crucial during hCEC culture and requires the use of an appropriate biomarker. In this study, we investigated EndMT via the fluorescence intensity of biomarkers in two steps. Firstly, we evaluated three specific biomarkers of hCECs (NCAM, CD166, and Na⁺/K⁺ ATPase) and three potential biomarkers of EndMT (CD73, TGFBI, and Col 5A1) using our TGF-β-induced EndMT model to select the most appropriate biomarker (Experiment/Result 2). We hypothesized that TGF-βs could decrease the fluorescence intensity of the three hCEC biomarkers and increase the fluorescence intensity of the three EndMT biomarkers. Secondly, we employed the selected biomarker (NCAM) to assess the effects of TGF-βs, their neutralizing antibody, and different TGF-β signaling pathway inhibitors on EndMT in hCECs that were cultured in the presence of FBS. All of these experiments were conducted using IF in 384-well plates. Alexa Fluor™ 488 was used as the fluorescence tracker linked with the secondary antibody. Image acquisition was performed using an epifluorescence microscope with a FITC filter cube (Ex: 450-490 nm, Em: 500-550 nm, DM filter: 495 nm), with one image captured per well using the X10 objective. All parameters, including the intensity of the light source, exposure time, and image resolution, were consistently maintained throughout the experiments. A fluorescence control (IF using only the secondary antibody) was performed for each experiment. The mean gray value of each image was measured using ImageJ, and the staining intensity was calculated by subtracting the mean gray value of the fluorescence control from the mean gray value of the biomarker.

Data standardization for quantitative comparisons

In order to compare data across different cell culture replicates (obtained from different donors) and experiments (performed at different times), we standardized the data (measured in terms of AR, ECD, or fluorescence intensity) by comparing them to their own control groups (untreated hCECs). For example, in cell cultures A and B (from corneas of different donors), the ECD was 1000 and 1300 cells/mm², respectively, in the control group (cells cultured in the medium without any evaluated molecules), and it was 650 and 900 cells/mm², respectively, under TGF-β1 treatment. To enable comparison of these data between cultures A and B, we standardized them by calculating the ratio to their own controls. The standardized ECD of TGF-β1 was 0.65 for culture A (i.e., 650/1000) and 0.69 for culture B (i.e., 900/1300).

Normalization of NCAM fluorescence intensity according to ECD

Since NCAM is present on the lateral membranes of CECs, the fluorescence intensity of NCAM staining may fluctuate with the ECD independently of EndMT. To address this, we normalized the signal to the cell perimeter, rather than solely to the ECD. To estimate the perimeters of hCECs from their ECDs, we assumed that hCECs were regular hexagonal cells. We calculated the perimeter from the ECD as follows: the area of a regular hexagon is A = (3√3 × s²)/2, where s is the side length; the perimeter is 6 × s; and A = 1,000,000 µm²/ECD. Thus, the perimeter per cell (µm) was 6√(2,000,000/(3√3 × ECD)). We calculated the fluorescence intensity per cell as follows: total fluorescence of the image = mean gray value × total pixels of the image, and total cell number in the image = surface area (mm²) × ECD (cells/mm²). Thus, the fluorescence intensity per cell = (mean gray value × total pixels of the image)/(surface of the image × ECD), and the normalized fluorescence intensity = fluorescence intensity per cell/perimeter per cell. Let us take TGF-β1-treated cells and control cells as an example. The mean gray value for TGF-β1 and the control is 100 and 200, respectively, with an ECD of 900 cells/mm² and 1300 cells/mm², respectively. The image resolution is 1024 × 1024 pixels, and its surface area is 1.6 mm². To normalize the fluorescence intensity of a single cell by its perimeter, we used the formula (100 × 1024 × 1024/(1.6 × 900))/(6√(2,000,000/(3√3 × 900))) for TGF-β1 and (200 × 1024 × 1024/(1.6 × 1300))/(6√(2,000,000/(3√3 × 1300))) for the control. We then calculated the ratio of the normalized fluorescence intensity of TGF-β1 to that of the control to standardize the data. This ratio is (100/200)/√(900/1300): the standardized fluorescence intensity is 100/200, and the standardized ECD is 900/1300, so the standardized normalized fluorescence intensity for TGF-β1 is the standardized fluorescence intensity divided by the square root of the standardized ECD. The normalized fluorescence intensity was used in Experiments 3 and 4, where the ECD of some groups was significantly different from that of the control group.
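A short Python sketch of this perimeter normalization, assuming the regular-hexagon model above, is given here; it reproduces the worked example (function and variable names are ours, not the authors').

```python
import math

def hexagon_perimeter_um(ecd):
    """Perimeter (um) of a regular hexagonal cell whose area is
    1,000,000 um^2 / ECD, with ECD in cells/mm^2."""
    return 6.0 * math.sqrt(2_000_000.0 / (3.0 * math.sqrt(3.0) * ecd))

def normalized_intensity(mean_gray, ecd, pixels=1024 * 1024, area_mm2=1.6):
    """Fluorescence intensity per cell divided by the cell perimeter."""
    per_cell = mean_gray * pixels / (area_mm2 * ecd)
    return per_cell / hexagon_perimeter_um(ecd)

# Worked example from the text: TGF-beta1 vs. control
ratio = normalized_intensity(100, 900) / normalized_intensity(200, 1300)
shortcut = (100 / 200) / math.sqrt(900 / 1300)
print(round(ratio, 4), round(shortcut, 4))  # both ~0.601
```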
Statistics

To compare the means of three or more independent groups and determine statistical significance, we used the one-way analysis of variance (ANOVA) test. When a significant difference was found, we performed Tukey's honestly significant difference (HSD) post-hoc test to identify which treatments (TGF-βs or inhibitors) were significantly different from the controls, as well as significant differences between any two groups. Graphs display the mean and standard deviation (SD) for each group, and the data presented above each bar represent the mean ± SD (min, max). Statistical analysis and graph construction were performed using GraphPad Prism.
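The same ANOVA-plus-Tukey workflow can be reproduced with open-source tools. The sketch below uses scipy and statsmodels on invented standardized-ECD values; the study itself used GraphPad Prism.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented standardized-ECD values for three example groups
control = np.array([1.00, 0.95, 1.05, 0.98, 1.02])
tgfb1 = np.array([0.65, 0.70, 0.62, 0.68, 0.66])
tgfb2 = np.array([0.97, 1.01, 0.94, 1.03, 0.99])

# One-way ANOVA across the groups
F, p = f_oneway(control, tgfb1, tgfb2)
print(f"ANOVA: F = {F:.1f}, p = {p:.2g}")

# Tukey's HSD post-hoc test for pairwise comparisons
values = np.concatenate([control, tgfb1, tgfb2])
labels = ["control"] * 5 + ["TGF-b1"] * 5 + ["TGF-b2"] * 5
print(pairwise_tukeyhsd(values, labels))
```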
The Three TGF-β Isoforms Induced EndMT on Confluent hCECs in the Absence of FBS

We first examined the impacts of the three TGF-β isoforms on confluent hCECs in the absence of FBS, since FBS contains various growth factors, including TGF-βs, as well as various hormones and biologically active substances that can potentially counteract the effects of added TGF-βs. The hCECs were cultured in complete growth medium for four weeks until full confluence in 24-well plates (for cell morphology analysis) and in 384-well plates (for ECD analysis). The cells were then exposed to SFM alone (control) or with 10 ng/mL of TGF-β1, 2, or 3 for seven days. To confirm that the observed biological effects were attributable to TGF-βs, we added 10 µg/mL of a neutralizing antibody against TGF-βs (1, 2, 3). EndMT was characterized by analyzing the cell shape and quantified using the aspect ratio (AR) parameter, and the ECD was also assessed. Cell elongation corresponding to the induction of EndMT was consistently observed when TGF-βs were present, particularly TGF-β1 and 3. The neutralizing antibody against TGF-βs reversed these morphological changes (Figure 1A). Quantitative analysis confirmed this observation (Figure 1B). The AR of cells treated with TGF-β1, 2, or 3 was significantly higher than that of the control, and the AR of TGF-β1 and 3 was significantly higher than that of TGF-β2. The neutralizing antibody against TGF-βs significantly reversed or reduced the effects of TGF-βs. Although the neutralizing antibody could not completely reverse the effects of TGF-β3, a significant decrease was observed with TGF-β3 + neutralizing antibody compared to TGF-β3 alone. Exposure to TGF-βs or the neutralizing antibody did not significantly influence the ECD (Figure 1C).

[Figure 1 caption, excerpt: (B) EndMT was assessed by cell morphology, represented by the AR (the ratio of the major axis to the minor axis of each cell); the AR data were obtained by manually tracing the contours of cells one by one on phase-contrast images (Supplementary Data S2); the higher the AR, the more severe the EndMT; n ≥ 87 cells for each group. (C) Comparison of the standardized ECD; ECD was assessed by nuclei staining using DAPI; n = 16 wells. At least two different cell cultures from two different donors were used for the quantitative analysis of AR and ECD. Standardization of the data was performed for each experiment and each cell culture by calculating the ratio to their own control (SFM alone, without TGF-β and without neutralizing antibody). Only significant differences from the control are denoted by red stars on the corresponding bars: *** p < 0.0001.]

Selection of NCAM as Biomarker to Assess EndMT

We used the TGF-β-induced EndMT model (in step 1) to identify a biomarker that could be used to quantify EndMT more easily and accurately. After the cells reached confluence, we induced EndMT in hCECs by incubating them in 384-well plates with SFM alone (control) or containing 10 ng/mL of TGF-β1, 2, or 3 for 7 days. We assessed each of the six preselected biomarkers (NCAM, CD166, Na⁺/K⁺ ATPase, CD73, TGFBI, and Col 5A1) using the IF technique. The images were acquired using the same parameters under a X10 objective during the same experiment (Figure 2A). The fluorescence intensity of each biomarker was then measured. A biomarker suitable for evaluating EndMT should exhibit a significant difference in fluorescence intensity between control and TGF-β-treated cells.

[Figure 2 caption, excerpt: ... calculating the ratio with that of NCAM; two different cultures from different donors were used (n = 12 wells). (C) Changes in fluorescence intensity under TGF-β1, 2, or 3 for each of the 6 biomarkers; the sensitivity of each biomarker to EndMT of hCECs was evaluated by applying each of the 6 biomarkers with SFM alone (control) or containing TGF-β1, 2, or 3; the data were standardized by calculating the ratio with their own controls; n = 6 wells from 3 cell cultures were used for each group (control or TGF-β1, 2, or 3) and for each biomarker. Significant differences by comparison to NCAM (B) and to controls (C) are marked with red stars on the corresponding bars: *** p < 0.0001, ** p < 0.001, * p < 0.005.]
NCAM and Na⁺/K⁺ ATPase showed significant differences between TGF-β-treated cells and non-treated cells (control/SFM alone) (Figure 2C). NCAM was more sensitive than Na⁺/K⁺ ATPase due to the greater decrease in its mean intensity induced by TGF-βs. In addition, NCAM exhibited a significantly higher baseline fluorescence intensity compared to Na⁺/K⁺ ATPase (Figure 2B), which is of practical significance: when a marker displays a strong fluorescence signal, it relies less on the microscope's performance, such as the camera sensitivity and the power of the fluorescence source. This advantage of NCAM can facilitate the relative quantification of NCAM in cells treated with various substances. A series of IF images of NCAM on hCECs treated with TGF-β1, 2, or 3, with or without the neutralizing antibody, is shown in Supplementary Data S2. The results confirmed the reliability of NCAM in highlighting the induction of EndMT.

TGF-β1 and 3 Induced EndMT and Lower ECD on Cultured hCECs in the Presence of FBS

In the primary culture of hCECs, FBS is necessary for cell expansion. To determine the impact of the three TGF-β isoforms on hCECs cultured in a basic growth medium (BGM) containing 8% FBS, we seeded hCECs at a density of 500 cells/mm² in 384-well culture plates. The cells were cultured either with BGM alone as a control or with 10 ng/mL of TGF-β1, TGF-β2, or TGF-β3, in the presence or absence of 10 µg/mL of a neutralizing antibody against TGF-βs. After four weeks of culture, we evaluated the hCECs using IF, focusing on two main criteria: the ECD and EndMT, the latter indicated by the fluorescence intensity of NCAM. IF images were acquired using the X10 and X40 objectives and are shown in Figure 3A. In cells cultured in the BGM-based medium supplemented with 8% FBS, the addition of TGF-β1 or 3 led to a significant decrease in ECD compared to the control (BGM alone), while TGF-β2 had no significant effect. The addition of the neutralizing antibody increased the ECD significantly compared to the control (Figure 3B). Additionally, the fluorescence intensity of NCAM decreased upon the addition of TGF-β1 or 3, indicating the induction of EndMT (Figure 3C).

The Assessment of Inhibitors of TGF-β Signaling Pathways Revealed Positive Effects of Inhibiting P38 MAPK, ROCK, and the TGF-β Receptor

One of the main goals of this study was to provide recommendations for the optimization of the primary culture of hCECs, which requires the presence of FBS. Therefore, we conducted this experiment using a culture medium containing FBS. We seeded hCECs at a density of 500 cells/mm² in 384-well culture plates, using either BGM alone as a control or one of the two different inhibitors tested for each TGF-β signaling pathway. After four weeks of culture, we assessed the effects on the hCECs using immunofluorescence (IF), with a focus on two main criteria: the ECD and EndMT (NCAM fluorescence intensity). The cells were exposed to the different test molecules continuously, from seeding to confluence.
IF images illustrating the cell size and morphology, which are also criteria used to assess cell quality, were acquired under a X40 objective and are shown in Figure 4A. Cells treated with both inhibitors of the Rho/ROCK pathway (Y27632 and Ripasudil), the P38 MAPK pathway (SB203580 and SB202190), and the TGF-β receptor (SB431542 and LY2109761) showed a visible improvement in cell morphology, exhibiting more hexagonal and regular cell shapes. Quantitative results indicated that the ECD was significantly higher in cells treated with both inhibitors of the Rho/ROCK and P38 MAPK pathways, whereas it was significantly lower with the two inhibitors of the Ras/Raf/MEK/ERK pathway (U0126 and AZD0364) (Figure 4B). Moreover, both inhibitors of the P38 MAPK pathway produced a significantly higher ECD than the Rho/ROCK pathway inhibitors. The anti-EndMT effect, reflected by an increase in NCAM fluorescence intensity, was observed in cells treated with both inhibitors of the TGF-β receptor (Figure 4C). These results are summarized in Figure 5, which only considers cases in which both inhibitors of the same signaling pathway showed corresponding and significant effects.

Figure 3. Effects of TGF-β1, 2, and 3 and the neutralizing antibody anti-TGF-βs on hCECs cultivated in the presence of FBS. After cell seeding in 384-well plates, the hCECs were cultured in a culture medium based on BGM that contained 8% FBS for 4 weeks. BGM only (control) or BGM containing 10 ng/mL of TGF-β1, 2, or 3 or 10 µg/mL of the neutralizing antibody was applied. (A) Illustrative images. Cell lateral membranes were stained in green by NCAM and the nuclei were counter-stained in blue using DAPI. A higher magnification (X40) was added in order to show the cell morphology. The images were taken in the center of each well. Images of the same magnification were acquired with the same parameters. The scale bar is 200 µm for X10 images and 50 µm for X40 images. (B) Assessment of standardized ECD. ECD was assessed by nuclei counting using DAPI staining. The ECD data were standardized by calculating the ratio with controls. (C) Assessment of EndMT by the normalized fluorescence intensity of NCAM. The IF images of NCAM were acquired with an FITC filter using the same parameters.
Normalized fluorescence intensity represents the fluorescence intensity per cell divided by the cell perimeter, as described in the Materials and Methods. The data were also standardized by calculating the ratio with the control. For (B) and (C), at least two cultures from different donors were used (n ≥ 16 wells). Significant differences by comparison to controls are denoted by red stars on the corresponding bars: *** p < 0.0001, ** p < 0.001, * p < 0.005.

Figure 4. Effects of inhibitors of the TGF-β signaling pathways on hCECs cultivated in the presence of FBS. After cell seeding in 384-well plates, the hCECs were cultured in a culture medium based on BGM that contained 8% FBS for 4 weeks. BGM only (control) or BGM containing one of the inhibitors of the TGF-β signaling pathways was assessed. (A) Illustrative images. Cell lateral membranes were stained in green by NCAM and the nuclei were counter-stained in blue using DAPI. The images were taken in the center of each well. Objective X40, scale bar = 50 µm. (B) Assessment of standardized ECD. ECD was assessed by nuclei counting using DAPI staining. The ECD data were standardized by calculating the ratio with controls. At least four cell cultures from different donors were used (n ≥ 32 wells). (C) Assessment of EndMT by the normalized fluorescence intensity of NCAM. The IF images of NCAM were acquired with an FITC filter using the same parameters. Normalized fluorescence intensity represents the fluorescence intensity per cell divided by the cell perimeter. The data were also standardized by calculating the ratio with the control. At least four cell cultures from different donors were used (n ≥ 8 wells). Significant differences by comparison to controls are denoted by red stars on the corresponding bars: *** p < 0.0001, ** p < 0.001, * p < 0.005.

Figure 5. Summary of the effects of inhibiting TGF-β signaling pathways on hCECs in vitro. The figure illustrates the main TGF-β signaling pathways. The names of the inhibitors are indicated inside rectangles. The pink and blue squares summarize the inhibitory effects on ECD or anti-EndMT. "+" indicates a positive effect (significantly different from the control) for both tested inhibitors of the same signaling pathway, "±" indicates that at least one of the two tested inhibitors did not show a significant difference, and "−" indicates a negative effect (significantly different from the control) for both inhibitors of the same signaling pathway.

Supplementary Data S1. Process of analyzing the EndMT by cell shape.

Effects of TGF-β 1, 2, and 3 on In Vitro hCECs

Previous studies typically focused on only one of the three isoforms of TGF-β in investigating its effects on CECs, with TGF-β2 being the most commonly studied [26,43-46]. Our findings, however, demonstrate significant variations among the three isoforms, with TGF-β2 exhibiting a less severe negative impact on hCECs than the other two isoforms. This emphasizes the need to carefully consider the specific isoform being utilized in future studies. The high-affinity human monoclonal antibody (Fresolimumab) that we utilized in our study functions to neutralize the active forms of human TGF-β1, 2, and 3 [47,48]. Nevertheless, the efficacy of this antibody against all three isoforms of TGF-β remains undetermined.
Interestingly, our observations revealed that the neutralizing antibody effectively reduced the EndMT effect induced by TGF-β1 and 2, but only partially attenuated the EndMT effect induced by TGF-β3. This suggests that the neutralizing antibody may have variable efficacy against the different isoforms of TGF-β, with relatively lower efficacy against TGF-β3. In the absence of FBS, TGF-β1, 2, and 3 induced EndMT on confluent hCECs within one week, resulting in decreases in NCAM's fluorescence intensity of 60%, 36%, and 60%, respectively, in confluent hCECs. In contrast, in the presence of FBS, TGF-β1, 2, and 3 induced EndMT within four weeks, leading to decreases in NCAM's fluorescence intensity of 40%, 22%, and 33%, respectively, during cell expansion. Our hypothesis is that FBS may serve as a buffer against the effects of TGF-βs. FBS contains a variety of growth factors and hormones that are capable of modulating the TGF-β signaling pathways. Some of these factors may counteract the pro-EndMT effect of TGF-β. Additionally, FBS can provide essential nutrients and antioxidants that protect cells from stress-induced EndMT [49]. We confirmed that TGF-βs, especially TGF-β1 and 3, cause a significant decrease in ECD, which is another important criterion in evaluating the quality of in-vitro-cultured hCECs. This could be partially explained by the decrease in the proliferation of hCECs [22,23]. Eliminating TGF-βs from FBS using a neutralizing antibody can lead to a significant increase in ECD, providing further evidence of the harmful impact of TGF-βs on the ECD. Conversely, beneficial effects of TGF-β have also been reported. When added to confluent cells cultured in a maturation medium, TGF-β1 can enhance the endothelial phenotype, in contrast to its induction of EndMT when added to cells grown in a proliferation medium [50]. Similarly, the improvement in the endothelial phenotype exerted by TGF-β2 in the same maturation medium was even greater than that of TGF-β1 [51]. We found some differences between the maturation medium and our medium without FBS.
Specifically, the previous maturation medium contained FBS but lacked the addition of ascorbic acid (an antioxidant) and had a lower concentration of TGF-βs (2 ng/mL compared to our 10 ng/mL). As discussed above, FBS may serve as a buffer against the pro-EndMT effect of TGF-βs. Furthermore, our ongoing study suggests that this buffering effect of FBS is more prominent in the ex vivo endothelium, where cells are in a confluent state. Moreover, low concentrations of TGF-β (1 ng/mL) are known to protect human trabecular meshwork cells from oxidative-stress-induced damage through the balance of p-AKT signaling [52]. One of the TGF-β signaling pathways, PI3K/AKT/mTOR, can play an antioxidant role [53,54], and it is known that oxidative stress is a critical factor in EMT engagement [49]. Referring to the trabecular meshwork cells, which are neighboring cells of hCECs, we speculate that low concentrations of TGF-βs could protect hCECs from EndMT through an antioxidant effect via the PI3K/AKT/mTOR pathway. Additionally, hCECs treated in maturation medium containing TGF-β at 2 ng/mL also showed stronger activation of the AKT pathway [51].

Proposal of a Screening System for Molecules to Evaluate Their Effects on hCECs In Vitro

Obtaining a large number of hCECs from primary cultures, particularly from the corneas of donors over 40 years of age, is a significant challenge, since clinical-grade hCECs have to date been obtained only from donors below 30 years of age [7]. Screening various molecules, such as different growth factors, cell signaling pathway inhibitors, and coating molecules, can help identify methods to optimize hCEC cultures. In this study, we present a simple method to screen molecules on hCECs cultured in 384-well plates using IF of NCAM and DAPI. Unlike phase-contrast observation, which remains difficult in small wells due to light reflection off the wells' walls, the small surface area of 10 mm² per well in 384-well plates allowed us to minimize the quantity of cells without negatively impacting the IF observation. The two main criteria, the ECD and EndMT, can be easily and reliably assessed by counting the DAPI-stained nuclei and measuring the fluorescence intensity of NCAM. Counting and measurement can be performed using the free software ImageJ, which is a user-friendly tool for image analysis. Our proposed formula for the normalization of the NCAM fluorescence intensity to the ECD provides a reliable method to assess EndMT in hCECs. Moreover, the staining of NCAM alone can serve as visible evidence to assess the quality of CECs. The perfectly drawn morphology of CECs, as shown by NCAM, is also an important indicator in evaluating cell quality. Finally, it is important to note that having a common control is essential for each experiment and cell culture, as it allows for an accurate comparison between them.

The differentiation status of hCECs is rather complex to assess. It is known that hCECs readily undergo phenotypic transformation into fibroblasts by EndMT [14]. We therefore set up a simple method to quantitatively assess the differentiation and EndMT status of hCECs. Six biomarkers were first preselected. NCAM [55-57], CD166 [58,59], and Na+/K+-ATPase [11,58] are specific biomarkers for hCECs, and CD73 [59], TGFBI, and Col5A1 are potential biomarkers to assess EndMT.
ZO-1 and N-cadherin, which are two other well-recognized, specific markers for hCECs [11,58], were also tested but were eliminated due to their low fluorescence intensity in cultured hCECs. SLC4A4 is frequently used as a specific biomarker for mRNA analysis [60,61], but its use in IF is still being discussed [58]. Vimentin, N-cadherin, collagen I, and fibronectin are well-known biomarkers for EMT but have not been validated for EndMT in hCECs. Vimentin and N-cadherin were ubiquitously expressed by normal hCECs [58], and obvious immunostaining of collagen I and fibronectin was found in cultured hCECs without EndMT. CD73 is a fibroblast biomarker and has been used to assess the EndMT status of hCECs [62], while Col5A1 and TGFBI have been reported as markers of EMT [63-65]. These three biomarkers have been previously tested in our laboratory and have shown promising results, making them interesting candidates for this study. Among the six preselected markers (NCAM, CD166, Na+/K+-ATPase, CD73, TGFBI, and Col5A1), NCAM was chosen for its high fluorescence intensity in normal hCECs and its high sensitivity to EndMT, making it the most suitable biomarker to quantify EndMT by measuring its fluorescence intensity. Moreover, NCAM is a lateral membrane marker that delineates the cell morphology, which is an additional criterion in characterizing the differentiation status of hCECs [58].

Effects of Inhibiting Various TGF-β Signaling Pathways on Cultured hCECs

While previous studies have reported the positive effects of individual inhibitors such as Y27632 (a ROCK inhibitor), SB203580 (a P38 MAPK inhibitor), and SB431542 (a TGF-β receptor inhibitor) on cultured CECs, a comparison of their effects is currently lacking. Furthermore, there is currently no research examining the effects of inhibiting the different TGF-β cell signaling pathways. In this study, we targeted the six main intracellular signaling pathways [66-70] and the TGF-β receptor using their inhibitors, including Y27632, SB203580, and SB431542. The ROCK inhibitor Y27632 is the most extensively studied molecule for hCECs and is considered the critical factor behind the success of the first cell therapy to treat corneal endothelial diseases [7]. Its multiple biological benefits for CECs, promoting cell adherence, proliferation, and survival [71-73], have been reported. In this study, we observed that Y27632 had a dual positive effect on the ECD and anti-EndMT. The promotion of cell proliferation may explain its impact on the increase in the ECD. Observations of anti-EMT effects in epithelial cells [74,75] and the indication of anti-EndMT in hCECs [76] are consistent with our study. The positive effects of another ROCK inhibitor, Ripasudil (approved for clinical administration [77,78]), were inferior to those of Y27632. The TGF-β receptor inhibitors SB431542 and LY2109761 have both shown an anti-EndMT effect on hCECs. This anti-EndMT effect is consistent with the literature [27,28]. In addition to the anti-EndMT effect, LY2109761 improved the ECD, with a significant increase of 23% compared to the control. These two inhibitors theoretically produce total inhibition of the TGF-β signaling pathway. The suppression of TGF-βs by their neutralizing antibody is also a method of total inhibition of the signaling pathway; it showed a more beneficial effect on the ECD than on anti-EndMT. The incompletely specific effect of the inhibitors and the complexity of the intracellular signaling pathways could be the causes.
Compared to the two chemical inhibitors, the suppression of TGF-βs by their neutralizing antibody remains economically infeasible for the mass production of hCECs. Both P38 MAPK inhibitors, SB203580 and SB202190, demonstrated excellent efficacy in increasing the ECD, by 59% and 65%, respectively, while also maintaining the differentiated state of hCECs. SB203580 has been reported to protect CECs' barrier function in vitro and ex vivo [79,80] and to exert numerous beneficial effects on hCEC culture through anti-senescence mechanisms [38]. The two inhibitors targeting the Ras/Raf/MEK/ERK pathways exhibited a negative effect on the ECD. These pathways are known to stimulate the growth of epidermal and epithelial cells, and our study highlights their importance for in vitro hCEC growth. This observation also demonstrates the complexity of the biological effects of TGF-β, which can promote the proliferation of certain cell types [81,82].

Limitations and Perspectives

In this study, we demonstrated that NCAM served as a reliable biomarker to assess EndMT induction due to its sensitivity, as evidenced by a significant decrease in fluorescence intensity. It is important to note that this decline in fluorescence intensity indicates a reduction in protein expression, which needs to be further confirmed through a series of Western blotting assays. Because the number of primary hCECs was too small to perform both IF and Western blotting for each target, we chose to develop an IF-based analysis method in this study. We employed two methods to minimize potential errors: (1) we utilized two different inhibitors that targeted the same signaling pathway; (2) the selection of molecule concentrations was guided by the relevant literature, with priority given to studies involving CECs, as indicated by the references cited in Table 1. In the field of corneal endothelia, there is an urgent need for a screening tool capable of evaluating a large number of molecules and optimizing the primary culture of hCECs. The proposed method aims to address this need. However, it is important to note that our approach has certain limitations that should be considered. In this study, the effects of the various inhibitors were only observed in a single passage of the cells. To ensure their potential clinical application, it will be necessary to evaluate the stability of cells treated with these drugs/inhibitors over time, starting from the P0 culture passage. Bonanno's team suggested that lactate flux is a component of the corneal endothelial pump [83-85]. It would be highly intriguing to investigate whether there exists a correlation between endothelial lactate efflux function and EndMT in hCECs. The IF tool and TGF-β-induced EndMT models presented in this study could be used to conduct such a correlational study. The team of Lee and Kay has extensively studied the effect of FGF-2 on EndMT in CECs in various publications [86-89]. It will be very interesting to explore the different FGF-2 signaling pathways with the molecular screening tool set up in this study in order to identify other inhibitors that may have positive effects on CEC culture. In addition, other growth factors or cytokines known to have EMT effects, such as MCP-1 [90,91], TNF-α [92,93], and IL-1β [94,95], and their signaling pathways, could also be potential targets in studying the optimization of the primary culture of hCECs.
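To make the quantification step of the proposed screening method concrete, the following is a minimal sketch, assuming per-well measurements exported from ImageJ (total NCAM fluorescence, DAPI nucleus count, and summed cell perimeter), of how the normalized NCAM fluorescence intensity and its standardization to the control wells might be computed; the column names, file name, and exact normalization are illustrative assumptions, not the published pipeline.

```python
# Minimal sketch (assumed data layout): per-well ImageJ exports with total
# NCAM fluorescence, DAPI nucleus count, and summed cell perimeter per well.
import csv
from statistics import mean

def normalized_intensity(total_fluorescence, n_cells, total_perimeter):
    # "Fluorescence intensity per cell / cell perimeter": mean per-cell
    # intensity divided by the mean per-cell perimeter (assumed reading).
    if n_cells == 0 or total_perimeter == 0:
        raise ValueError("empty or unmeasured well")
    per_cell_intensity = total_fluorescence / n_cells
    mean_perimeter = total_perimeter / n_cells
    return per_cell_intensity / mean_perimeter

def load_wells(path):
    with open(path, newline="") as f:
        return [
            {
                "condition": row["condition"],
                "value": normalized_intensity(
                    float(row["ncam_total_intensity"]),
                    int(row["dapi_nucleus_count"]),
                    float(row["total_cell_perimeter"]),
                ),
            }
            for row in csv.DictReader(f)
        ]

wells = load_wells("plate_384_measurements.csv")  # hypothetical export file
control_mean = mean(w["value"] for w in wells if w["condition"] == "control")
for w in wells:
    # Standardization by ratio with the common control, as described above.
    w["standardized"] = w["value"] / control_mean
```

In the same spirit, the ECD criterion would simply be the DAPI nucleus count per well area, standardized to the same common control.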
Conclusions

In-vitro-cultured hCECs are susceptible to the negative effects of TGF-βs, particularly TGF-β1 and 3, which can induce EndMT and decrease the ECD. TGF-β receptor inhibition had an anti-EndMT effect. Inhibition of the ROCK pathway and, most notably, of the P38 MAPK pathway increased the ECD, while inhibition of the ERK pathway decreased the ECD. For molecular screening studies, we recommend using the method presented in this study, based on the IF of NCAM/DAPI.
2023-06-29T05:08:01.521Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "476b3da6c363fe0b3a6da83e043ff579e64897ab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/cells12121624", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "476b3da6c363fe0b3a6da83e043ff579e64897ab", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232308147
pes2o/s2orc
v3-fos-license
A Tale of Two Meckel's: small bowel obstruction secondary to Meckel's Diverticulum

Abstract

We present two rare cases of small bowel obstruction (SBO) secondary to Meckel's diverticulum (MD) where the mechanism of obstruction was not readily apparent. Both were cases of virgin abdomen with pre-operative CT scans demonstrating SBO without a clear underlying cause or mass. Diagnostic laparoscopy was performed, which established the underlying cause to be MD, and laparoscopic-assisted resection was undertaken to resect small bowel and perform a side-to-side stapled anastomosis. We subsequently describe the different mechanisms by which MD can cause obstruction as described in the literature.

INTRODUCTION

Meckel's diverticulum (MD) is uncommon as a finding, occurring in 2% of the population, while complicated or symptomatic cases of MD are even more rarely encountered. The rule of twos succinctly summarizes the key features of this embryological variant: ~2% prevalence, 2 inches in length, 2 feet proximal to the ileocaecal valve, located on the antimesenteric border of the ileum, some (15-50%) containing 2 types of heterotopic mucosa (gastric or pancreatic), and most cases presenting before the age of 2 [1,2]. We describe two cases of MD causing small bowel obstruction (SBO) in two adults with a virgin abdomen where the mechanism of obstruction was not readily apparent. Additionally, of note, these two patients presented over 2 consecutive days at the same institution, a coincidental adherence to the oft-quoted 'rule of twos'.

CASE REPORT

Case 1 is a 24-year-old male with a virgin abdomen who presented with a 3-day history of crampy abdominal pain and vomiting. He was haemodynamically normal and afebrile. CT demonstrated SBO with a transition point located in the right pelvis and a small amount of free fluid (Fig. 1). The appendix was identified as normal. On Day 0 of his admission, he underwent a diagnostic laparoscopy. Intraoperatively, an MD was found at the site of the transition point between small bowel dilated proximally and collapsed distally (Fig. 2). Enteric contents were thickened, raising the possibility of a faecolith. Macroscopically, the MD appeared to be normal, with no features of intussusception, volvulus or inflammation at the site. The MD was exteriorized through a mini-laparotomy, and small bowel resection with a side-to-side stapled anastomosis was performed. Histopathology revealed MD with no features of inflammation or ectopic mucosa.

Case 2 is a 56-year-old male, with a virgin abdomen, who had 2 days of crampy abdominal pain, vomiting and obstipation. He was haemodynamically normal and afebrile. He had a distended abdomen with generalized tenderness. Blood tests showed an elevated lactate of 3 and a WCC of 11.7 × 10⁹/L. CT demonstrated distal SBO, with distension of small bowel up to 6 cm (Fig. 3). He underwent a diagnostic laparoscopy with identification of an MD at the transition point, subsequently exteriorized through a mini-laparotomy (Fig. 2). The apex of the MD was tethered to the mesentery through a band containing the diverticular blood supply. Small bowel resection and anastomosis was performed. Histopathology showed MD with acute inflammation, haemorrhage and necrosis, and no ectopic tissue.

DISCUSSION

MD is an example of a true diverticulum, encompassing all layers of the small bowel wall, with a blood supply derived from a terminal branch of the superior mesenteric artery [3].
While initially reported by the German surgeon Wilhelm Fabricius Hildanus in 1598, MD is eponymous for the German anatomist Johann Friedrich Meckel the Younger, who first described its embryogenesis in 1809 [1]. MD arises due to the failure of closure of the omphalomesenteric duct during the 5th to 7th weeks of development [4]. Complicated MD occurs in 4% of cases, and associated features include age less than 50, male sex, abnormal mucosa and length >2 cm [5]. Complications include obstruction, inflammation, perforation, bleeding or malignancy [1,5]. Multiple mechanisms may precipitate obstruction, with the most common events being volvulus or intussusception [1,5]. The presence of embryonic bands attaching the MD to the umbilicus or to the mesentery is commonly implicated. An omphalomesenteric band, which connects the MD to the umbilicus, can lead to volvulus or entrapment of bowel. A mesodiverticular band is one that is attached to the diverticulum and ileal mesentery and may directly compress the ileum or create an opening for internal herniation. Intussusception can also occur, particularly when the diverticulum is short and thickened with inflammation, ectopic tissue or tumour acting as a lead point [3]. Other mechanisms include previous diverticulitis episodes causing band adhesions, acid secretion by ectopic mucosa leading to luminal stenosis, occlusion by faecoliths, enteroliths or bezoars, or incarceration within an inguinal hernia, called Littre's hernia [1,6]. The diagnosis of MD as the cause of SBO is often not made until the operation. CT is very accurate in identifying an obstruction, although it has poor sensitivity and specificity in detecting MD. For instance, a pre-operative diagnosis with CT was made in only 50% of cases in a recent case series of MD causing SBO. Radiological features suggesting this diagnosis include dilated small bowel loops with a transition point at or near the midline, and the presence of a blind-ending tubular outpouching of the distal ileum located at the terminal branch of the SMA, with or without inflammation or an enterolith. Relevant negative findings include a normal appendix and no previous surgery [7]. In both cases, diagnostic laparoscopy was used to establish and treat the cause of SBO. Unlike cases described in the literature where a specific mechanism is associated with MD causing obstruction, in the two cases we present the exact mechanism was not directly observed. In case 1, the enteric contents were quite thickened; the possibility of an enterolith could not be excluded, although this was not demonstrated on CT scan. The microscopic diagnosis was also not remarkable, and therefore the sequence of events leading to obstruction with the transition point is unclear. In the second case, it appeared that a band or adhesions caused tethering of the apex of the MD to the ileal mesentery, which served as the site of the transition point. There was inflammation noted on histopathology; therefore, it is likely that this contributed to the obstruction, although this could be a secondary phenomenon if the adhesions caused volvulus leading to obstruction, as described in another case [8].
2021-03-23T05:19:53.532Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "4d3d10faaaab1ea7835d872ccc6632d0f4ff84c6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/jscr/rjab037", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d3d10faaaab1ea7835d872ccc6632d0f4ff84c6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
80427074
pes2o/s2orc
v3-fos-license
Wilson Quad Helix Expansion & Uncommon Extraction of Mandibular Incisor: A Case Report

The most important question for the clinician is whether it is warranted to extract a single lower incisor in borderline cases. In the present case report, in order to resolve the minor crowding, a lingually placed lower incisor was extracted and the soft tissue profile was maintained. Lower incisor extraction is indicated in carefully selected cases, especially where the space requirements do not call for greater dento-alveolar movement.

INTRODUCTION

The decision to extract permanent teeth as an aid in resolving arch length deficiencies presents a challenge to the orthodontist. Some patients are ideal candidates neither for extraction nor for non-extraction treatment. 1 Hahn in 1942 advocated the removal of a mandibular incisor, closing the extraction space and thus reducing the anterior crowding. 2 As pointed out by Kokich and Shapiro (1984), the deliberate extraction of a lower incisor in certain cases allows the orthodontist to improve occlusion and dental aesthetics. 2

Tooth-size and arch-length discrepancy, or arch crowding, has traditionally been managed by means of first or second premolar extraction. First or second molar extraction is a less common approach. Incisor extraction is another alternative in the mandibular arch. In 1905, Jackson described a case in which two lower incisors were extracted at different times to relieve mandibular crowding. 3 According to Proffit, mandibular incisor extraction comprised 20% of all orthodontic extraction cases in the 1950s, but was rarely used thereafter.

Case report

A 14-year-old female reported with the chief complaint of irregular teeth. The patient's past medical and dental history were not contributory. The patient presented with an orthognathic facial profile, incompetent lips, and an average mandibular facial height (Fig. 1). Intraoral examination revealed an Angle's Class I molar relationship with mild crowding, a constricted upper arch, an overjet of 3 mm, an overbite of 3 mm, lingually inclined lower first molars, and an average curve of Spee.

Carey's analysis and arch perimeter analysis indicated a tooth size-arch length discrepancy of 0.4 mm in the maxillary arch and 4 mm in the mandibular arch. Bolton's analysis indicated a mandibular anterior tooth material excess of 4 mm.

The treatment objectives were to eliminate the crowding in the upper and lower arches, expand the upper arch, upright the lower molars, and maintain the overjet, the overbite, and the acceptable facial profile. Considering the above treatment objectives, it was planned to extract the mandibular right lateral incisor, which would help resolve the lower anterior crowding while maintaining the patient's soft tissue profile.

Treatment progress

A quad helix was placed at the start of the treatment and activated once a month for a period of 4 months, after which expansion of the upper arch had taken place. 0.022-inch slot MBT brackets were bonded, and 0.014 NiTi followed by 0.016 NiTi arch wires were used to align the upper and lower arches. After alignment, the extraction of 42 was done, followed by 0.016 x 0.022 NiTi arch wires for 1 month. Later, 0.017 x 0.025 NiTi arch wires were placed in the upper and lower arches.
After placement of these arch wires, the quad helix was removed and the upper arch was consolidated. Lingual buttons were welded to the 36 and 46 molar bands, and red cross-elastics were used for correction of the lingually inclined molars (36 and 46), along with 0.016 Australian arch wires in the upper and lower arches, for a period of 2 months. After this period of 2 months, the lower molars were uprighted. Later on, 0.017 x 0.025 NiTi wires were placed in the upper and lower arches, followed by 0.017 x 0.025 SS wires, which were followed by 0.019 x 0.025 SS archwires in the upper and lower arches.

Factors to be considered regarding the choice of extraction are: 6 the amount of tooth-size and arch-size deficiency; the anterior tooth ratio; periodontal conditions; and the upper and lower midline relationships.

Approximately 80% of orthodontic patients need arch expansion in cases of a narrow maxilla. 7 According to the literature, maxillary expansion can be done by two kinds of procedure. The first, rapid maxillary expansion (RME), can be done by using an appliance that incorporates a screw, for example a Haas or Hyrax. The second is the slow maxillary expansion group, which includes removable expansion plates, the Porter W arch, and the quad helix.

The quad helix was developed in 1975 by Robert Murray Ricketts from Porter's "W" arch, adding four loops to the appliance and increasing the wire length by 40 to 50 mm. The objective was to reduce the forces and obtain better molar control. 8 Several authors have written that the quad helix appliance can deliver sufficient forces to promote skeletal changes in the maxillary bone in younger patients. 9-18 Authors such as Zachrisson (AJO, 1990), Hopkins (AJO, 1977), Fusher, Schwartz, Wits, and Kokich and Shapiro have also studied the effects of incisor extractions.

Fig. 1: Pre-treatment extraoral photos. Fig. 2: Correction of the lingually inclined lower molars with cross-elastics.

CONCLUSION

Jackson was the first to advocate extraction of a lower incisor to relieve crowding. Extraction of a mandibular incisor is a logical alternative that may improve the dental occlusion and dental aesthetics, and may improve stability in the mandibular anterior region. Careful case selection is necessary for the extraction of an incisor. This patient reported with a moderate overjet and overbite, a lingually placed lower incisor, and lingually inclined lower molars, with an acceptable soft tissue profile. Careful diagnosis is necessary to analyze the treatment goal and outcome.
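As a worked illustration of the Bolton analysis cited in the case work-up, the sketch below computes a Bolton anterior ratio and the implied mandibular tooth-material excess; the ideal anterior ratio of 77.2% is Bolton's published norm, while the tooth widths are invented example values, not this patient's measurements.

```python
# Illustrative Bolton anterior analysis (hypothetical tooth widths in mm).
IDEAL_ANTERIOR_RATIO = 77.2  # Bolton's norm for the six anterior teeth (%)

def bolton_anterior(mand_widths, max_widths):
    """Return (ratio %, mandibular excess in mm) for the six anterior teeth."""
    sum_mand, sum_max = sum(mand_widths), sum(max_widths)
    ratio = 100.0 * sum_mand / sum_max
    # Mandibular tooth material in excess of what the maxillary teeth can pair with:
    excess = sum_mand - (IDEAL_ANTERIOR_RATIO / 100.0) * sum_max
    return ratio, excess

mandibular = [5.6, 6.1, 7.2, 7.2, 6.1, 5.6]  # example incisor/canine widths
maxillary = [8.6, 6.7, 7.9, 7.9, 6.7, 8.6]
ratio, excess = bolton_anterior(mandibular, maxillary)
print(f"Anterior ratio: {ratio:.1f}% (ideal 77.2%), mandibular excess: {excess:.2f} mm")
```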
2018-07-25T06:19:44.098Z
2017-09-25T00:00:00.000
{ "year": 2017, "sha1": "77c7f9b155e5eafe98fd29fe5c9412d312817202", "oa_license": "CCBY", "oa_url": "https://doi.org/10.13005/bpj/1242", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "77c7f9b155e5eafe98fd29fe5c9412d312817202", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247500458
pes2o/s2orc
v3-fos-license
Signal-to-Noise Ratio Based Fault Detection and Identification

In this work, we introduce signal-to-noise ratio (SNR) based fault detection and identification mechanisms for a networked control system feedback loop, where the network component is represented by an additive white noise (AWN) channel. The SNR approach is known to be a steady-state analysis and design tool, thus we first introduce a finite time approximation for the estimated AWN channel SNR. We then consider the case of a general linear time-invariant plant model with one unstable pole. The potential faults that we discuss here cover simultaneously the plant model gain and/or the unstable pole. The fault detection is performed relative to the estimated AWN channel SNR. The fault identification is performed using recursive least squares ideas and then further validated with the observed SNR value, when a fault has been previously detected. We show that the proposed SNR-based fault mechanism (fault detection plus fault identification) is capable of processing the proposed faults. We conclude by discussing future research based on the contributions exposed in the present work.

INTRODUCTION

Control theory, from the 20th century into the 21st century, moved from what is known as classical control into new research areas such as networked control systems (NCSs). Theorists and practitioners alike have been very active (Chen et al., 2011), since results in NCSs are intrinsically multidisciplinary by definition, for example, by considering simultaneously established results in control and also information theory (Nair and Evans, 2004; Martins and Dahleh, 2008). Other examples joined linear optimal control results together with communication theory results (Elia, 2004; Braslavsky et al., 2007; Rojas, 2012) for additive white noise (AWN) channels. Similarly, an optimal approach for output tracking control over erasure channels, ensuring stability subject to model uncertainties, has been proposed in Jiang et al. (2021). In more recent years, we have also seen an increase in results that involve event-triggered NCS controllers (Heemels et al., 2012; da Silva et al., 2014; Campos-Delgado et al., 2015), which attempt to use the limited available communication and energy resources sparingly and nevertheless achieve a set of given goals, be those goals stability, performance, or robustness. These and other NCS results can constitute the foundation for a better control practice in the near future.

An NCS result, contributed early on in Braslavsky et al. (2007), imposes a channel input power constraint P for an AWN channel in a control feedback loop and then characterizes the infimal channel signal-to-noise ratio (SNR) in terms of the plant model features (unstable poles, nonminimum phase zeros, etc.). The resulting SNR expression can then be used to revisit the control feedback loop stability in terms of an SNR limitation, in particular when the controlled plant model is unstable. Specifically, the SNR fundamental limitation expressions contributed in Braslavsky et al. (2007) deal with unstable single-input single-output (SISO) linear time-invariant (LTI) plant models, both in continuous time and discrete time, characterizing the infimal channel input SNR bound required to achieve control feedback loop stability.
A large body of contributions also exists on the topic of fault detection and identification, with many books already written on these topics (Gertler, 1998; Chen and Patton, 1999; Blanke et al., 2003; Isermann, 2006; Saberi et al., 2007; Varga, 2017), together with informative review articles such as Ding et al. (2000) and Saberi et al. (2000). A fault is usually defined as an abnormal behavior occurring in a process, which is of interest first to detect, then to identify, and then (ideally) to properly recover from. There are different formulations for the problem of fault detection for LTI systems, which can be roughly categorized as approximate formulations (such as the synthesis of fault detection filters subject to noise) and exact formulations (such as the nullspace method). The variability inherent in NCSs might also be caused by anomalous variations in the plant model. An NCS example of the proposed setup is presented in Figure 1, where in this article we have considered a memoryless AWN communication channel in place of the communication network, specifically over the feedback path. These anomalous variations can be given the interpretation of faults, thus the need to develop a fault mechanism to detect and identify them (Figure 1), to later on inform a possible controller adaptation, in order to achieve what is known as a fault-tolerant control feedback loop. Ding (2012) contributed a survey on NCS fault detection and fault-tolerant control. Another review on fault diagnosis for NCSs can be found in Aubrun et al. (2008), with the objective of reducing performance degradation due to the different NCS communication features. A dynamic observer is designed for sensor fault detection under finite frequency disturbance and noise in a linear NCS in Dai et al. (2021). In Ren et al. (2018), an event-triggered H-infinity fault detection filter has been contributed in order to reduce unnecessary communication in an NCS dominated by time-varying latency and fading phenomena. A Bayesian approach, on the other hand, is the basis in Lami et al. (2020) for a fault detection proposal in the context of an NCS irrigation canal application, while Li et al. (2009) use a Markov jumping linear system (MJLS) approach to define their residual generator. An NCS robust fault-tolerant control feedback loop is designed in Bahreini and Zarei (2019), with faults also modeled as an MJLS, but with incomplete knowledge of the transition probabilities, for which linear matrix inequality (LMI) based sufficient conditions are then presented so as to ensure stochastic stability. In a multi-agent context, task allocation is proposed in Schenk and Lunze (2020) to achieve fault tolerance through the cooperation between a set of healthy and faulty agents, instead of focusing on recovering nominal performance; see also Wang et al. (2021). A nonlinear model predictive control scheme, subject to random network latencies and random packet dropout phenomena, is used to design a fault-tolerant control feedback loop in Wang et al. (2016), based on a predictive observer with guaranteed input-to-state stability. On the other hand, a class of nonlinear NCSs, where the nonlinear terms are modeled using neural networks, has been studied by Ye et al. (2021), and LMIs are used to obtain the fault detection filter gains. Fault detection for nonlinear NCSs subject to random delays has also been considered using LMIs by Li et al. (2020), Huang and Pan (2020), and Gu and Yao (2021).
Finally, a robust neural network-based controller was designed to detect and mitigate false data injection attacks (which can be interpreted as malicious faults) in Sargolzaei et al. (2020). The current state of the art on fault mechanism designs for NCSs still lacks the option of an SNR approach-based fault detection and identification mechanism. We also observe that most of the reported NCS contributions include a communication network simultaneously over the controller-to-plant (C2P) and the plant-to-controller (P2C) paths (Figure 1). However, when designing the NCS feedback loop, there is always the potential to collocate either the controller with the sensor devices, thus considering only the C2P path, or the controller with the plant model, and only then dealing with the P2C path for the explicit AWN channel location. In this work, we focus on the P2C path, since for fault detection, the C2P path option, or the simultaneous presence of AWN communication channels in both locations, can be addressed in a similar manner. Our first contribution in this article is to establish a fault detection algorithm to determine the occurrence of faults based on a finite time estimated AWN channel SNR, for a SISO LTI plant model with one unstable pole. Our second contribution is to add to the previous detection algorithm a fault identification stage using the recursive least squares (RLS) algorithm which, upon a fault being flagged, can discriminate faults consistent with the estimated AWN channel SNR. We use examples, when appropriate, to further illustrate the proposed contributions. This article is organized as follows: Section 2 presents the general assumptions, introducing the plant and AWN channel models. We also present here the AWN channel SNR deduction for a control feedback loop. Section 3 addresses the contributions of this work; that is, we define in full the proposed finite time AWN channel SNR estimation, the SNR-based fault detection stage, and the fault identification stage for the proposed plant model. In Section 4, we discuss the possible avenues for generalization of the presented results in future research and summarize the present work.

METHODS

In the following subsection, we proceed to list the assumptions for the present work.

Assumptions

- LTI plant model: The plant model G(z) is assumed to be an LTI model given by \(G(z) = \frac{K}{z-\rho}\,G_s(z)\), with \(K \in \mathbb{R}^+\), \(|\rho| > 1\), and \(G_s(z)\) a known proper, stable rational transfer function.
- AWN channel model: The AWN channel model is characterized by its channel input power constraint P and channel additive noise n(k). The channel input power constraint is such that distortionless transmission is achieved both at nominal and at faulty conditions.
- Channel additive noise process n(k): The channel additive noise process n(k) is assumed in this work to be a zero-mean, independent and identically distributed, white noise process. The noise variance σ² is assumed to be known.
- Reference signal: The reference signal is assumed to be constant and of value r ∈ ℝ.

The plant LTI model, AWN channel model, and channel additive noise process assumptions are in line with the SNR approach and can be traced to the seminal work of Braslavsky et al. (2007). The reference signal assumption is adapted from the work of Rojas (2021).

Signal-to-Noise Ratio Constrained Control Approach

We now proceed to illustrate the SNR constrained control approach. For this, we take the case of r = 0. From Figure 2, we have that the channel input power is calculated as \(\|y\|^2_{\mathrm{Pow}} \triangleq \lim_{k\to\infty} \mathbb{E}\{y^2(k)\}\).
The power at the plant output, \(\|y\|^2_{\mathrm{Pow}}\), cannot exceed the channel input power constraint, \(P > \|y\|^2_{\mathrm{Pow}}\), in order to guarantee a distortionless transmission. Under the stationarity condition (see Åström, 1970, §4.2), the channel input power can also be stated as \(\|y\|^2_{\mathrm{Pow}} = \|T(z)\|_2^2\,\sigma^2\). Here T(z) is the complementary sensitivity function of the feedback loop, that is, the transfer function with output y(k) and input r(k). We can then restate the channel input power inequality as an SNR inequality by means of the H₂ norm of T(z),

\(\frac{P}{\sigma^2} > \inf_{C(z)\in\mathcal{K}} \|T(z)\|_2^2,\)

where \(\mathcal{K}\) is the set of stabilizing controllers. From the previous equation, we have that the SNR inequality highlights a lower bound defined over the set of stabilizing controllers \(\mathcal{K}\); for example, see Rojas (2012). When the plant model is unstable, this lower bound cannot be zero and thus becomes a fundamental limitation for unstable plant models.

Finite Time Approximation

For designing a fault detection scheme, we cannot in practice guarantee k → ∞ to compute the channel input power definition from the available measurement of y(k). To address this, we introduce the following definition:

Definition 1. L is the sample length over which the stationarity assumption for the control feedback loop signals (in particular, the channel input signal) holds for a given tolerance value ϵ defined by the user.

Based on Definition 1, we then propose an L sample length moving average of the channel input signal y(k) as its finite time approximation, \(y_L(k) = \frac{1}{L}\sum_{i=0}^{L-1} y(k-i)\). We are then left with appropriately selecting the value of L. For this, we propose to use the averaged signal variance, such that \(\sigma^2_{y_L} \le \epsilon\) for a given tolerance value ϵ defined by the user. To perform this selection of L, we then introduce Algorithm 1.

Algorithm 1. Estimation of L. Starting with the initialization stage, Algorithm 1 runs an outer for loop of a simulation based on Figure 2, evaluating the infimal SNR over an AWN channel over the P2C path (using, for example, MATLAB). Then, the inner for loop retrieves the simulated output vector y(k) data to repeat the y_L calculation a total of N times over a rolling time window of selected length L, from Tss + k + 1 to Tss + k + L. T_ss is a time value set so as to discard any initial-condition transient. The selections for the inner for loop will test the candidate value of L through a specific channel noise realization, once the closed-loop dynamics have settled (by means of T_ss). The outer for loop completes the algorithm by averaging the selection of L over a number of noise realizations determined by the parameter iter, and by testing the ϵ stopping condition with the sample variance of y_L(k) obtained from the inner loop. If the sample variance fails the test, then we add one to the working value of L and repeat each of the steps. If the sample variance satisfies the ϵ stopping condition, we then output the last working value L as the selected time window length in Eq. 4.

RESULTS

With the value of L settled using Algorithm 1, we now move on to providing a lemma for SNR-based fault detection.

Lemma 1. SNR Fault Detection. The Fault Flag (FF) variable is raised to 1 when a fault is detected in an NCS feedback loop as shown in Figure 2; that is, FF(k) = 1 whenever the finite time SNR approximation SNR_L(k) deviates from Γ_o, the nominal theoretical SNR (with no faults), by more than the confidence level C, with L chosen using Algorithm 1. In turn, the confidence level C is selected as equal to α times the ratio between σ_y, the theoretical stationary standard deviation of the channel input y(k), and σ, the standard deviation of the channel noise n(k).
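As a rough illustration of Lemma 1 and of the finite time approximation above, the following sketch computes the L-sample SNR estimate and the fault flag. Taking SNR_L(k) as the L-sample mean square of y over σ², and using an absolute-deviation threshold, are our reading of the lemma (its original equations are not reproduced here), so these exact forms should be treated as assumptions.

```python
# Sketch of the Lemma 1 detector (assumed forms: SNR_L(k) is the L-sample
# mean square of the channel input y over the noise variance sigma^2, and
# the flag uses an absolute-deviation test against the nominal SNR).
import numpy as np

def snr_finite_time(y, L, sigma2):
    """L-sample moving estimate of the channel input SNR (assumed form)."""
    y = np.asarray(y, dtype=float)
    power = np.convolve(y**2, np.ones(L) / L, mode="valid")  # moving mean of y^2
    return power / sigma2

def fault_flags(y, L, sigma2, gamma_nominal, alpha, To_h2_norm):
    # Confidence level C = alpha * sigma_y / sigma = alpha * ||T_o||_2,
    # since sigma_y = ||T_o||_2 * sigma under stationarity.
    C = alpha * To_h2_norm
    snr_L = snr_finite_time(y, L, sigma2)
    return (np.abs(snr_L - gamma_nominal) > C).astype(int)

# With the values reported later in Example 1: Gamma_o = 9.9474,
# ||T_o||_2 = 2.9912, alpha = 2, so C = 5.9824 and L = 92.
```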
The fault detection mechanism proposed in Lemma 1 constitutes the first contribution of the present work.

Remark 1. We observe that the proposed SNR fault detection mechanism is transparent to the simultaneous presence of AWN channels over the C2P and P2C paths. The presence of both AWN channels will result in a different value of Γ_o, which is predicted (for example, see Rojas, 2013) in terms of T_o(z), the nominal complementary sensitivity (without faults), of σ²_C2P, the C2P additive channel noise variance, and of σ²_P2C, the P2C additive channel noise variance.

Remark 2. Since \(\|y\|^2_{\mathrm{Pow}} = \mathbb{E}\{y^2\} = \sigma_y^2 + \mu_y^2\) and \(\mu_y^2 = |T_o(1)|^2 r^2\), we have \(\sigma_y^2 = \|T_o\|_2^2\,\sigma^2\). Thus, the confidence level defined in Eq. 7 can be interpreted in terms of the ratio between the part of the feedback loop channel input power due to the channel noise and the channel noise variance, and can then be rewritten as \(C = \alpha\,\|T_o\|_2\).

Remark 3. The value of α, on the other hand, is a design parameter for the fault detection mechanism, highlighting a trade-off between the rate of false positive fault detections (detecting a fault when there is none) and false negative detections (not detecting a fault when there is one). Therefore, the α parameter needs to be selected with care depending on the specific problem. If α is too small, then the noise level will trigger more false positive detections. This could still be traded off against a larger value of L, but that would imply an increased delay in detecting a fault when it occurs, due to the need to average a larger number of samples to obtain SNR_L. On the other hand, if the value of α is too large, there will be the opposite effect; that is, it would increase the occurrence of false negatives (the claim that there is no fault when there really is one). We would expect that if the expected fault SNR level is large with respect to the nominal SNR level Γ_o, then larger values of α could be used, because there would be a lower likelihood of false negatives. On the other hand, if there exists previous information of a smaller fault SNR level with respect to the nominal SNR level Γ_o, then a smaller α should be used, with a lower limit imposed by the presence of the channel noise.

Remark 4. We observe that the SNR approach behind Lemma 1 is a stationary approach. As a consequence, assuming stability of the control feedback loop, the same proposed detection mechanism could potentially be applied to nonlinear plant models, since it is well known that a nonlinear state trajectory can be approximated by a linear state trajectory in the vicinity of a stable equilibrium point.

We now address, in the next lemma, our second contribution, which consists of adding to the previous SNR-based detection algorithm a fault identification stage based on the process identification RLS algorithm.

Lemma 2. Fault Identification. Consider that a fault takes place in the plant model Eq. 1 due to a simultaneous change ΔK in the value of K and Δρ in the value of ρ, when in a feedback loop (Figure 2) with a controller that includes integral action, where K_i is the integral action gain. The above controller is assumed to stabilize both the nominal feedback loop and the faulty feedback loop. The fault identification mechanism (Figure 1) has access to the values of u(k) and y(k) and can identify the fault values ΔK and Δρ, when the FF(k) in Eq. 5 is 1, by means of a finite memory recursive least squares (FM-RLS) algorithm, \(\hat{\theta} = (\Phi_L^T \Phi_L)^{-1}\Phi_L^T \mathbf{y}\), where u_s(k) is the output of the stable part of the plant model, that is, \(u_s(k) = g_s(k) * u(k)\), with \(g_s(k) = \mathcal{Z}^{-1}\{G_s(z)\}\). Proof.
When the AWN channel is over the P2C path, the complementary sensitivity function describes the feedback loop relationship between y(k) and n(k) (bar a negative sign), as well as the feedback loop relationship between y(k) and r(k). Given the proposed controller structure, the theoretical channel SNR is then determined by the complementary sensitivity. Due to the channel noise being white, we satisfy the condition of persistent excitation for closed-loop identification. In the presence of a fault due to a simultaneous change ΔK and Δρ, if the signals y(k) and u(k) are available, then it is a matter of observing the correct regressor matrix to identify the changing values of the parameters K(k) and ρ(k) (time-varying values due to the faults) together with an FM-RLS. The use of process identification methods, such as RLS, for correct fault identification is suggested, for example, in Isermann (2006, Ch. 9). Observing that G_s(z), the stable part of the plant model, is not subject to change, we can then filter u(k) and obtain \(u_s(k) = g_s(k) * u(k)\). The resulting regressor matrix \(\Phi_L\) for the parameter vector \(\theta = [K(k)\;\; \rho(k)]^T\), with memory length L, is then built by stacking the regressor rows of past samples of u_s and y. The estimated parameter vector is then \(\hat{\theta} = (\Phi_L^T \Phi_L)^{-1}\Phi_L^T \mathbf{y}\), as in Eq. 11, with \(\mathbf{y} = [y(k-L)\; \ldots\; y(k)]^T\), which concludes the proof.

FIGURE 3 | Evolution of the estimated value of L as per Algorithm 1.

The fault identification mechanism proposed in Lemma 2 constitutes the second contribution of the present work. We now illustrate these two mechanisms with an example.

Example 1. We proceed with the present example by stating the values of the proposed parameters in Table 1. The parameter values in Table 1 are a representative selection. The greater the values of r, ρ, and K, the greater the value of the nominal SNR Γ_o. The values of the parameters iter, Tsim, and ϵ related to Algorithm 1 are such that we achieve convergence of L to the value of 92. The plant model parameters G_s(z) and controller C(z) are such that we have control feedback loop stability at nominal and at faulty conditions. The controller C(z) contains integral action to achieve reference signal tracking. In Figure 3, we have the numerical evaluation of Algorithm 1 for the proposed plant model (Eq. 14). The plant model has one unstable pole, so as to have a nominal SNR Γ_o greater than one (see Braslavsky et al., 2007 for more details). The other pole and the transmission zero are stable. The plant model in Eq. 14 is then put in a feedback loop together with the proposed controller (Eq. 15). The proposed controller is such that it grants tracking of the proposed constant reference signal r = 1, due to the pole at z = 1, and it inverts and cancels out the stable part G_s(z) of the plant model. As the iterations in Algorithm 1 increase (Figure 3), the L value for the finite time approximation is tested and, failing the comparison with the stop value of ϵ, is then increased, up to the final selected value of L = 92 after the set of 500 iterations. It is reasonable to assume convergence is achieved, since over the last 300 iterations the value of L grew by less than 10%. Notice from Figure 3 that the value of L is also effectively updated when the variance of y_L(k) exceeds the threshold limit defined by ϵ in Table 1 (shown by the horizontal orange dashed line). We consider the effect of two faults for the proposed plant model, described by the values in Table 2.
Observe that the proposed faults focus on ΔK, which can be interpreted as an input fault, and on Δρ, which is a fault that more directly affects the SNR level under faulty conditions. We do not consider here, and leave as future research, the case of fault changes in the stable part G_s(z) of the plant model, since this would affect the controller's stable cancellation of it and might result in an unstable control feedback loop under faulty conditions. The feedback loop starts in the nominal condition until time sample k = 5,000, when the first fault, described by the pair (ΔK₁, Δρ₁), takes place. The first fault ends at time sample k = 10,000, recovering nominal conditions. At time sample k = 15,000, the second fault, described by the pair (ΔK₂, Δρ₂), takes place, lasting up to time sample k = 20,000. We then recover nominal conditions again, up to time sample 25,000, when the simulation concludes. The theoretical SNR at nominal conditions is 9.9474, whereas the theoretical SNR under the first fault condition is 60.7302, while the theoretical SNR for the second fault is 60.9940. We now consider the application of Lemma 1 to detect the proposed faults. Notice that, with G(z) in Eq. 14 and the controller C(z) in Eq. 15, the nominal feedback loop complementary sensitivity T_o has an H₂ norm of 2.9912. Thus, the confidence level is obtained as C = 5.9824. In Figure 4A, we report the estimated finite time SNR_L, from an output signal y(k) realization, when the feedback loop is subjected to the two described faults, shown as a blue solid line. The use of Lemma 1 is reported by an orange solid line, and it correctly reports the fault occurrences, with a few instances of false negative detections (not detecting a fault when a fault is present) and instances of false positive detections (detecting a fault when no fault is present) at the end of each fault period. To illustrate the trade-off behind the selection of the value of α, we test the confidence level for different values of α against the false positive and false negative probabilities in Figure 4B. We also report in Figure 4A two examples of how the confidence level affects the occurrences of false negatives and false positives, for the alternative values of α = 1, black solid line, and α = 4, green solid line. As α increases, and thus the confidence level C also increases, the false positive probability (defined as the ratio between the number of false positive samples and the number of samples under nominal conditions) decreases, blue solid line in Figure 4B. However, as α increases and the confidence level C increases, we now have that the false negative probability (defined as the ratio between the number of false negative samples and the number of samples under faulty conditions) increases, red dashed line in Figure 4B. Clearly, the best value of α to set the confidence level C is one that trades off the reduction in false positive probability against the reduction in false negative probability, which corresponds to the α value where the two lines cross each other. For this example, this value is approximately 2.5. This confirms, a posteriori, that the working choice of α = 2 was reasonable. Once a fault has been detected, we then use Lemma 2 to identify the said faults. In Figure 5, we report the successful identification of both faults using the proposed FM-RLS approach. The result is quite good for both parameters, but it is not instantaneous, as the zoomed views in the right panels of Figure 5 show.
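To make the identification step concrete, below is a minimal sketch of the finite-memory least-squares estimate of Lemma 2 over a sliding window. The one-step regression y(k) = ρ·y(k−1) + K·u_s(k−1), and the resulting regressor layout, are our reconstruction from the stated plant model structure, not the paper's verbatim matrices.

```python
# Finite-memory least squares for [K, rho], assuming the unstable part of
# the plant yields y(k) = rho*y(k-1) + K*u_s(k-1) (our reconstruction).
import numpy as np

def fm_ls_estimate(y, u_s, k, L):
    """Estimate theta = [K, rho] from the window of samples ending at k."""
    rows = range(k - L, k + 1)
    Phi = np.array([[u_s[i - 1], y[i - 1]] for i in rows])  # regressor matrix Phi_L
    Y = np.array([y[i] for i in rows])
    theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # (Phi^T Phi)^-1 Phi^T Y
    return theta_hat

def identify_when_flagged(y, u_s, flags, L):
    """Run the estimator only at samples where the fault flag is raised."""
    return {k: fm_ls_estimate(y, u_s, k, L)
            for k, flag in enumerate(flags) if flag == 1 and k > L}
```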
In this identification, there is another trade-off between the value of L, the quality of the identification, and the lag in correctly identifying the magnitude of the fault, ΔK and/or Δρ. The bigger the value of L, the better the quality but the longer the lag, and vice versa. Observe that, after the recursive estimates have settled, the identified fault parameters are the exact values reported in Table 2. Thus, if we were to check the expected SNR under these two identified faults, we would observe values in agreement with the SNR_L previously observed in Figure 4A during the faults. The successful use of Lemma 2, as reported in Figure 5, is tied to having access to the signal u(k) at the process input (Figure 1). However, the presence of an AWN channel model over the P2C path, as in Figure 2, implies that the signal u(k) can only be available at the same location as the signal y(k) (to then inject it into the fault identification mechanism) if transmitted by an independent network. This is so because, if u(k) were perfectly available at the plant output side, it would mean that the controller is also collocated with the plant output, and then there would be no real need for an AWN channel model over the feedback loop. We test this in Figure 6 by assuming that u(k) is transmitted through a secondary AWN channel model, with noise independent of the already stated AWN channel model over the P2C path. The channel noise variance of this secondary AWN channel is assumed to be 2% of σ², and even so, the application of Lemma 2 considering this transmitted version of u(k) results in far noisier and biased estimates of the two faults; see the left panels in Figure 6. The bias effect can be somewhat ameliorated by evaluating it during nominal operation (no faults) and then subtracting it from the obtained estimates when the faults are present; see the right panels in Figure 6. We can observe from the same figure that, for these two faults, ΔK is the fault component most affected by the noise present in u(k). This suggests that, to implement a fault identification mechanism based on Lemma 2, the transmission quality for the signal u(k) needs to be orders of magnitude better than that of the channels operating inside the feedback loop.

DISCUSSION

In this work, we presented SNR-based fault detection and fault identification mechanisms for an NCS feedback loop, where the network is represented by an AWN channel over the P2C path. To the best of the authors' knowledge, the stated contribution is novel, inasmuch as, in the current state of the art, no fault mechanism design for NCSs uses the SNR approach or deals with the AWN channel model. The steady-state SNR approach required the introduction of a finite time approximation to estimate the relevant feedback loop signals, which we developed here. We also considered a fairly general LTI plant model containing one unstable pole. The faults that we studied were represented by sudden changes in the plant model gain and/or the unstable pole value. The fault detection was achieved by comparison with Γ_o, the nominal SNR of the AWN channel over the P2C path. The effect of the inclusion of an optimal tracking objective, or the potential inclusion of simultaneous channels in the C2P and P2C paths, can also readily be incorporated into the nominal SNR value Γ_o. On the other hand, the fault identification was performed here using an FM-RLS approach, once a fault has been previously detected.
We showed with an example that the proposed SNR-based fault mechanism (fault detection plus fault identification) was capable of processing the proposed faults, with the caveat of requiring almost perfect access to the signal u(k) at the process input. The SNR-based fault detection mechanism was not compared with other NCS-based fault mechanisms because, as far as the authors have surmised from the current state of the art, no other comparable results exist for AWN channel models subject to a power constraint (the core of the SNR approach). There are indeed other fault detection and identification solutions for NCSs, as presented in the Introduction, but they focus on different communication features (channel latencies, erasure, fading, etc.). Moreover, even if a comparison with other NCS fault detection results were feasible, considering the result of Example 1, the fault detection response of our contribution successfully detects the proposed faults, and other methods could at best perform equally well. This is due to the on-off nature of the fault detection question (either there is a fault or there is not), and at best, the answer from any other method would be the same. With respect to the SNR-based fault identification mechanism, the comparison with other methods could indeed be more nuanced, but again, considering the results in Figure 6, we obtained an excellent fault identification result when using the FM-RLS approach here, a performance which could at best only be tied by other current NCS fault identification methods if they were actually comparable (which they are not, because they consider communication network features other than additive channel noise and a channel input power constraint). Future research should consider relaxing the requirement on u(k) for fault identification, proposing a different fault identification mechanism using, perhaps, a priori knowledge of the types of faults to be expected, addressing the case of fault changes in the stable part G_s(z) of the plant model, and considering other types of communication channel models. For example, if we want to consider optimal output tracking over erasure channels, we can adapt the results in Jiang et al. (2021) to obtain a new analytical power constraint expression akin to Γ_o.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
2022-03-18T13:14:45.447Z
2022-03-18T00:00:00.000
{ "year": 2022, "sha1": "bc750291f6e5b720a15d48a57b10e2396645b826", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcteg.2022.806558/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "bc750291f6e5b720a15d48a57b10e2396645b826", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
236531586
pes2o/s2orc
v3-fos-license
Continuous flow left ventricular assist devices do not worsen endothelial function in subjects with chronic heart failure: a pilot study

Abstract

Aims: To evaluate endothelial function in subjects with left ventricular assist devices (LVADs), comparing them with subjects with chronic heart failure with reduced ejection fraction on the list for heart transplant (HT) and with HT patients with normal systolic cardiac function, to identify any differences.

Methods: We enrolled 28 subjects with LVAD, 55 subjects with HT, and 42 subjects with heart failure on the transplant list. The subjects underwent a general physical examination, assessment of laboratory blood parameters, and assessment of endothelial function through flow-mediated dilation (FMD) of the brachial artery.

Results: The three groups were homogeneous as regards age, gender, smoking habit, C-reactive protein (CRP) and FMD parameters (P = ns). In the LVAD group, the percentage of FMD change showed an inverse correlation with CRP (rho: −0.5, P: 0.003), a well-known marker of inflammation and tissue damage.

Conclusions: The continuous flow related to LVADs seems not to worsen endothelial function. Endothelial function was not affected by cardiovascular risk factors (hypertension, hypercholesterolaemia, diabetes, obesity, and tobacco habit), by the functional status expressed by the New York Heart Association class, by the left ventricular systolic function, or by the presence or absence of ischaemic heart disease in any of the populations analysed. CRP was the only factor able to influence the percentage of FMD change in patients with LVAD, reinforcing the hypothesis that inflammation is the main determinant of endothelial function.

Introduction

Continuous flow left ventricular assist devices (LVADs) represent a valid therapeutic option in the treatment of patients with chronic refractory heart failure (HF). 1 These devices draw blood from the left ventricle and pump it directly into the aorta, supplanting the depressed function of the left heart. 2 In recent years, LVADs have mainly been used as a bridge to transplantation, but increasing long-term reliability is opening the door for their use as a definitive solution, the so-called "destination therapy", for the treatment of terminal HF. 1,3-5 Current guidelines suggest the implantation of these devices as destination therapy in subjects with chronic HF refractory to medical therapy, ineligible for heart transplantation, with an Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) level ≥2. 1 The latest generation of devices consists of rotating continuous flow pumps of limited size, able to generate up to 10 to 12 L/min of flow. Given the continuous flow, they do not contain valves. 6 These devices have been shown to improve morbidity and mortality in critically ill patients waiting for heart transplant (HT) and, at the same time, to reduce adverse events; they can also determine a reverse remodelling of the heart in patients with nonischaemic cardiomyopathy and in a smaller subset of patients with ischaemic cardiomyopathy. 7-9 One potential adverse event of long-term continuous flow LVAD support is arterial endothelial cell dysfunction, which could result in impaired vascular reactivity. After the implementation of LVAD circulatory support, the pulsatile nature of the arterial flow pattern decreases dramatically.
In addition to the longitudinal stretching forces (the so-called "shear stress"), the cyclical deformation produced by the pulsatility of the flow represents an independent modulator of endothelial function, able to exert an impact on nitric oxide synthase, cell pH, and the physical alignment of blood cells. 10 Pulsatile shear stress and cyclic strain of an appropriate magnitude are requisite to maintain endothelial cell homeostasis. 11 The inflammatory status, with high levels of cytokines in subjects with LVAD, could contribute to the worsening of endothelial function. 12 Most of the available studies in the literature focusing on endothelial function in LVAD subjects have used healthy subjects as controls, making the comparison unreliable due to the different characteristics of the populations. The purpose of our study was to evaluate endothelial function in subjects with LVADs, comparing them with subjects with chronic HF with reduced ejection fraction on the list for HT and with HT patients. The categories of subjects analysed represent three different expressions of the same pathology, heart failure, in different clinical scenarios: in the end stage of HF, after HT, and after implantation of a ventricular assist device in those ineligible for transplantation. The aim of our study was to detect any determinant of endothelial function in these three groups of subjects, to evaluate the specific effect of continuous flow.

Material and methods

We performed an observational two-centre study on 28 subjects with LVADs, 55 subjects with HT, and 42 subjects with HF on the transplant list. Subjects were evaluated at the HF Clinics of the Cardiology Unit of the University of Bari and of the Niguarda Hospital of Milan during the period from January 2018 to June 2019. We enrolled only subjects with LVAD and HT at least 1 year after the intervention; all the subjects were evaluated on optimal medical therapy and after at least 1 month of clinical stability. We excluded subjects with important non-cardiological comorbidities, that is, symptomatic cerebrovascular diseases and relevant kidney diseases requiring dialysis. The subjects underwent a general physical examination, assessment of laboratory blood parameters, and assessment of endothelial function through flow-mediated dilation (FMD) of the brachial artery. Patients were informed about the aim of the study and signed consent forms. The study was approved by the Institutional Review Board of the two hospitals and carried out in accordance with the principles of the Declaration of Helsinki.

Clinical and laboratory evaluation

The demographic and clinical characteristics are shown in Table 1. Height (cm), weight (kg), arterial blood pressure, and heart rate were measured, and body mass index (BMI) (kg/m²) was calculated. Functional status was evaluated using the New York Heart Association (NYHA) functional classification. 13 Mean arterial pressure was determined in LVAD subjects using an Omron HBP-1300 oscillometric device at the level of the brachial artery, 14 while in subjects with HF and HT the standard method was used.
15 Subjects were classified as hypertensive if they had systolic/diastolic blood pressure values >140/90 mmHg or if they took anti-hypertensive medication; 16 dyslipidaemic in the presence of serum total cholesterol >200 mg/dL, low-density lipoprotein cholesterol >115 mg/dL, high-density lipoprotein cholesterol <40 mg/dL, or triglycerides >150 mg/dL, or if they used lipid-lowering agents; 16 diabetic in the presence of a fasting glucose level >126 mg/dL or when they took antidiabetic drugs; 16 and overweight or obese if their BMI ranged from 25 kg/m² to 29.9 kg/m² or was >30 kg/m², respectively. 16 Finally, each participant was considered a "current daily smoker" in the presence of a daily consumption of at least five cigarettes a day in the previous 3 months or if they had stopped smoking less than one year before admission. 16 C-reactive protein (CRP) and N-terminal pro brain natriuretic peptide (NT-proBNP) values were also obtained using immunoenzymatic assays. Endothelial function was evaluated using FMD of the brachial artery, according to the standard protocol. 17 The occlusion cuff was placed around the forearm, distal to the ultrasound probe. 16 To avoid confounding factors, the collected data underwent off-line analysis by a blinded observer, with determination of the following data: baseline diameter of the brachial artery, peak diameter of the brachial artery, absolute FMD change, and percentage FMD change. 17 Left ventricular ejection fraction was also assessed in all subjects using echocardiographic evaluation through the Simpson method. 18

Statistical analysis

The data were analysed using the Kolmogorov-Smirnov test to determine their distribution. Statistical significance between the groups was calculated using the Student t test for independent samples for normally distributed data and the Kruskal-Wallis and Mann-Whitney U tests for non-normally distributed data. Bonferroni correction was applied when comparing quantitative variables between the groups. Correlation analysis was performed with the Spearman rank correlation test. Statistical significance was considered for P < 0.05. All statistical analyses were performed with the SPSS Statistics 20 software.

Results

Demographic characteristics of the population are shown in Table 1. The three groups were homogeneous as regards age, gender, smoking habit, CRP, and FMD parameters (Table 2). LVAD subjects had higher BMI values than those with HT and those with chronic HF. Their average heart rate was lower compared with subjects with HT and higher compared with subjects with HF. NT-proBNP values of subjects with HT were significantly lower compared with subjects with LVADs and with HF. Finally, HF subjects had a significantly lower heart rate compared with the other two groups, higher values of NT-proBNP, and a lower prevalence of ischaemic heart disease compared with LVAD subjects. Because no significant differences were found among the groups regarding the several parameters obtained using FMD (Table 2), we performed the correlation analyses with the main characteristics of the population using only the percentage of FMD change, the most widely used index of endothelial function (Table 3). 19 This analysis showed only a significant negative correlation between FMD and CRP values among subjects with LVAD. No other significant correlations were found between the population's characteristics and the percentage difference of FMD.
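The percentage FMD change used throughout this paper is derived from the baseline and peak brachial artery diameters, and the key finding rests on a Spearman rank correlation between the percentage FMD change and CRP. As a minimal illustration, the sketch below reproduces these computations in Python with scipy/numpy; the diameter and CRP values are purely hypothetical and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical brachial artery diameters (mm) and CRP values (mg/L);
# illustrative only, not the study's data.
baseline_mm = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.2])
peak_mm     = np.array([4.4, 4.0, 4.6, 4.3, 4.0, 4.5])
crp         = np.array([1.2, 3.5, 0.8, 2.0, 4.1, 0.9])

# Absolute and percentage FMD change, as defined in the standard protocol.
fmd_abs = peak_mm - baseline_mm
fmd_pct = 100.0 * fmd_abs / baseline_mm

# Distribution check (Kolmogorov-Smirnov on standardised values), as in the paper.
z = (fmd_pct - fmd_pct.mean()) / fmd_pct.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# Spearman rank correlation between %FMD and CRP (non-parametric).
rho, p = stats.spearmanr(fmd_pct, crp)
print(f"FMD%: {np.round(fmd_pct, 2)}  KS p={ks_p:.3f}  Spearman rho={rho:.2f} (p={p:.3f})")
```

The same scipy.stats module also provides kruskal and mannwhitneyu for the between-group comparisons described above.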
Moreover, no other significant correlations were found between CRP and the other parameters of endothelial function (baseline and peak diameter of the brachial artery and absolute FMD change) (Table 4).

Discussion

The objective of our study was to investigate whether the use of continuous flow LVADs could exert negative effects on endothelial function compared with subjects with homogeneous anthropometric characteristics with chronic HF and HT. For this purpose, we compared three populations that express different stages of the same pathology, that is, chronic HF. Our analysis showed no significant differences in the several FMD parameters among subjects with LVAD, HF, and HT. Moreover, endothelial function, expressed as the percentage difference of FMD, was not affected by cardiovascular risk factors (hypertension, hypercholesterolaemia, diabetes, obesity, and tobacco habit), by the functional status expressed by NYHA class, by the left ventricular systolic function, or by the presence or absence of ischaemic heart disease in any of the populations analysed (Table 3). No further damage at the endothelial level is therefore associated with the use of these devices, as already demonstrated. 20,21 In the LVAD cohort, the percentage of FMD change showed an inverse correlation with CRP (Table 3), a well-known marker of inflammation and tissue damage, associated with atherothrombotic events both in patients with known cardiovascular disease and in healthy individuals. 22 These results show how the mechanisms regulating endothelial function are complex and not easily classified into specific categories, but inflammation plays the main role. 22 Although the favourable effects of the first generation of pulsatile flow LVADs on endothelial function have been widely demonstrated, 23,24 the increasingly widespread use of continuous flow LVADs leaves open the discussion of the long-term effects of continuous flow on the cardiovascular system. In addition to longitudinal stretching forces (the so-called "shear stress"), the cyclical deformation produced by the pulsatility of the flow represents an independent modulator of endothelial function, 25 able to exert an impact on nitric oxide synthase, cellular pH, and the physical alignment of blood cells, with effects also at the genic level. 10 The loss of pulse pressure could therefore significantly affect endothelial function. While the autopsy study of Potapov's group found no histological differences in the vascular beds between subjects with HF and subjects with continuous flow LVADs, 26 Segura's group highlighted the presence of changes in the aortic tunica media related to continuous flow LVADs. 27 In addition, reduced pulsatility and cyclical deformation cause atrophy of vascular walls and reduction of vascular calibre. 28 On the other hand, studies on humans are limited and show a deterioration or no improvement in endothelial function in subjects with LVADs. In the study of Amir et al., subjects with continuous-flow LVADs had significantly lower FMD values than patients with pulsatile-flow LVADs, the latter being associated with better vascular reactivity. 29 In 2012, the study of Lou X et al.
20 evaluated endothelial function, through changes in the plethysmographic signal at the level of the finger artery for 5 min after reactive hyperaemia, in a group of seven patients in NYHA class IV before LVAD implantation, in a second group of six patients 1-4 months after LVAD implantation, and in a third group of seven healthy subjects of the same age used as controls. The results of the study showed significantly higher values of the reactive hyperaemia index (a measure of endothelial function) in the control group than in patients with HF and LVAD, while no difference was found between subjects with HF and LVAD, showing that the presence of an LVAD had no effect on endothelial function in patients with HF. On the other hand, Hasin et al. 30 evaluated the reactive hyperaemia index in eight subjects with HF before and after (5-14 days, 1-2 months, and 3-6 months) LVAD implantation. The study showed a progressive decline in the hyperaemia index and therefore a worsening of endothelial function related to the LVAD. In 2013, Morgan et al. 31 showed no significant differences in FMD among 20 patients with LVAD, 19 patients with HF, and 19 patients with HT. Later, in 2015, the study of Hasin et al. 32 showed a persistent decline of endothelial function, evaluated through the reactive hyperaemia index, up to 5 months after LVAD implantation in 18 subjects, with a parallel increase of adverse cardiovascular events. Recently, the study of Symons et al. showed no effect of durable continuous flow LVAD support on coronary artery endothelial function, and even an improvement in subjects with non-ischaemic dilated cardiomyopathy, using ex vivo isometric tension procedures in 16 patients with ischaemic cardiomyopathy, 22 patients with non-ischaemic cardiomyopathy, and 7 donor controls. 21

Limitations

The study has some limitations: first, the small size of the samples analysed; second, we did not evaluate endothelium-independent dilation through sublingual glyceryl trinitrate administration, because of the high risk of side effects in these populations of subjects already on hypotensive drugs; third, the shear rate stimulus was not assessed. 17

Conclusions

Our study demonstrated no significant effect of continuous flow LVADs on endothelial function compared with two groups of subjects with HF and HT homogeneous by age and gender. Endothelial function was not affected by cardiovascular risk factors (hypertension, hypercholesterolaemia, diabetes, obesity, and tobacco habit), by the functional status expressed by NYHA class, by the left ventricular systolic function, or by the presence or absence of ischaemic heart disease in any of the populations analysed. CRP was the only factor able to influence the percentage FMD change in LVAD subjects, reinforcing the hypothesis that inflammation is the main determinant of endothelial function. In conclusion, continuous flow related to LVADs does not seem to worsen endothelial function. Larger studies are needed to confirm this concept.
Knowledge of Primary Health Care Providers in Nairobi East District, Kenya, Regarding HIV-related Oral Facial and Other Common Oral Diseases and Conditions

Background: In the Kenyan primary health care (PHC) setting, where most patients, including nearly 1.4 million HIV-infected people, seek medical care, PHC providers are expected to identify and manage HIV-related oral diseases during general consultations. This study aimed to assess the current knowledge of clinical officers and nurses in Nairobi East district of Kenya regarding HIV-related oral diseases and conditions.

Design and Methods: A 40-item questionnaire was used in interviewing all 57 PHC providers in 2 administrative divisions in the district in a cross-sectional survey. The assessed categories were: knowledge about HIV-related oral lesions, clinical appearance of HIV-suspected conditions, knowledge about oro-pharyngeal candidiasis (OPC), general dental knowledge, common appearances of OPC, knowledge about periodontitis, causes of dental caries, frequency of general oral examinations, and past training in oral health topics. The first 4 categories were confirmed as sub-domains, with Cronbach's alpha of 0.57, 0.54, 0.59, and 0.45, respectively.

Results: All 57 PHC providers (15 clinical officers and 42 nurses) completed the questionnaire (response rate 100%). PHC providers did not routinely perform oral examinations. Their knowledge about HIV-related oral health topics and general oral health was found to be generally inadequate.

Recommendations: A training module on HIV-related oro-facial lesions for Nairobi PHC providers, incorporating a practical session covering oral examinations, is recommended, especially in this high HIV-prevalence environment.

INTRODUCTION

Nearly 90% of individuals infected with human immunodeficiency virus (HIV) develop visible signs of oral diseases in the course of their illness [1]. The more common HIV-related oral lesions include oro-pharyngeal candidiasis (OPC), oral hairy leukoplakia, oral Kaposi's sarcoma, necrotising gingivitis, and enlargement of the parotid glands [2-7]. Some lesions appear soon after HIV infection, allowing early identification of this infection [3]. The presence of these lesions may also serve as an early warning sign of non-response to treatment in patients already receiving highly active antiretroviral therapy (HAART) and of lowered immunological function [8-11]. Early identification and management of the lesions may eventually improve the quality of care for these patients, as well as their quality of life. In Kenya most patients, including an estimated 1.4 million HIV-infected ones [12], seek medical care in primary healthcare (PHC) settings [13,14]. Clinical officers and nurses, the main service providers in PHC establishments, are trained to perform general medical examinations, make diagnoses, and treat or refer patients to higher levels of care. As first-line service providers they are also expected to perform oral health tasks during consultations and record the results daily on government tally sheets [15]. They need competences and skills to perform the expected oral healthcare tasks. However, these are assumed to be inadequate. To investigate their adequacy, the present study aimed to identify gaps in the knowledge of clinical officers and nurses regarding HIV-related oral lesions and other common oral conditions. It was conducted in the Nairobi East district.
DESIGN AND METHODS

Approval for this study was obtained from the Kenyatta National Hospital/University of Nairobi Ethics and Research Committee (approval number KNH-ERC/A/474) and from the Ministry of Public Health and Sanitation (Ref. NO. MPHS/IB/1/14 Vol. III). Written consent for the study to be conducted in Nairobi East district was received from the provincial and district heads of the department. This trial was registered in the Netherlands Trial Register (http://www.trialregister.nl, NTR2627).

Selection of Participants

Nairobi province, as the capital city, has the largest number of clinical staff (medical doctors, dentists, nurses, and clinical officers) in the country, with highly accessible public HFs, in comparison to other regions of the country [16,17,18,19]. The proportions of these clinical staff nationally and in Nairobi East district are comparable [20], as shown in Table 1. Of the total 466 clinical officers and nurses, 270 work in 54 public health facilities. The study was conducted among all 57 PHC providers in 4 health facilities (HFs) in Njiru (n=32) and 4 HFs in Makadara (n=25) divisions in the Nairobi East district. These were test and control divisions, as described in our earlier publication [21]. Dental researchers evaluated the appropriateness of the questionnaire. Questions were checked for simplicity and clarity and modified in accordance with their comments. The final questionnaire contained forty items (Fig. 1). The domain with questions pertaining to PHC provider knowledge concerning HIV-related oro-facial lesions comprised seventeen items. The domain covering questions on knowledge about general oral conditions comprised fifteen open-ended items. Three items at the beginning of the questionnaire were related to background characteristics (age, gender, and qualification). The last five items were questions related to training, clinical experience, and performance regarding oral examinations. This questionnaire is also discussed in our earlier publication [21]. In generating an initial codebook, the panel held a discussion in order to reach consensus on all possible correct and incorrect responses to open-ended questions with multiple responses. Figs. 2-22 show examples of several oral facial and other common oral diseases and conditions.

Implementation

The questionnaire was presented on the same day to all (n=57) PHC providers. All participants were given a brief spoken introduction to the confidentially coded questionnaire before they answered it. Participants were allowed to seek clarification regarding the questions. Each filled questionnaire was checked for completeness before it was collected. Open-ended questions with no right or wrong answer, such as those that needed "further explanation", were noted, as were "I do not know" responses. After coding of all scripts, results were checked for agreements and differences. Disagreements were discussed until consensus was reached.

Statistical Analysis

SAS software (SAS Institute, Cary, NC, USA) was used for statistical analysis. The two aforementioned domains (knowledge about HIV-related oro-facial lesions and knowledge about general oral health) contained 7 categories of questions. For each category, mean values and standard deviations were calculated. Cronbach's alpha coefficients were used for checking internal consistency.
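Since internal consistency is central to how the sub-domains were confirmed, a compact illustration of how Cronbach's alpha can be computed may be helpful. The following is a generic Python sketch with hypothetical 0/1 item scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the domain
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 0/1 item scores for 6 respondents on a 4-item sub-domain.
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Alpha values in the reported range (0.45 to 0.59) indicate modest internal consistency, which fits the exploratory use of the sub-domains here.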
RESULTS

Characteristics of Participants

All 57 PHC providers (15 clinical officers and 42 nurses; response rate 100%) completed the questionnaire. There were more female nurses (n=32) and female COs (n=15) than male ones. Sixty percent of the participants were 40 to 52 years old and 40% were 23 to 39 years old. Cronbach's alpha identified four sub-domains (from the 7 categories of questions), i.e., knowledge of symptoms of HIV-related oral lesions; knowledge of the clinical appearance of HIV-suspected conditions; knowledge of OPC; and knowledge about common appearances of OPC and general dental knowledge (Table 2).

Knowledge of Symptoms of HIV-related Oro-facial Lesions

The mean domain score for PHC providers relating to knowledge of the common HIV-related oro-facial lesions (angular cheilitis, OPC, necrotising gingivitis/periodontitis, herpes zoster, and Kaposi's sarcoma) was 4.6 (out of a maximum of 7). This was higher than the scores for the other three domains. The most highly scored item related to knowledge of herpes zoster as an HIV-related oro-facial lesion (0.94 out of a maximum of 1).

Knowledge of the Clinical Appearance of HIV-suspected Conditions

Knowledge of three lesions (herpes simplex, herpes zoster, and angular cheilitis) was assessed. The mean score, 1.74 out of a maximum possible 3, was considered moderate. Again, PHC providers were more familiar with herpes zoster than with the other two lesions.

Knowledge of OPC

Knowledge of the signs and symptoms of pseudomembranous OPC, and of the reasons why this lesion is of significance in HAART patients, was assessed. The moderate domain mean score of 1.84 out of a maximum of 3 was mainly derived from PHC providers' knowledge about the clinical signs of pseudomembranous OPC and its associated symptoms, such as pain experienced in eating and swallowing (0.78 and 0.68, respectively, out of a maximum of 1). The score on the clinical significance of this lesion as a marker of HAART failure was low (0.4 out of a maximum of 1).

Knowledge about Common Appearances of OPC, and General Dental Knowledge

Responses regarding three visual clinical aspects of OPC resulted in a mean score of 1.05 out of a maximum of 3. Knowledge about the pseudomembranous type mainly contributed to the score. Eighteen percent knew 2 types, the second most-mentioned type of OPC being the erythematous type, while 12% of the respondents knew nothing about the clinical appearance of OPC. None of the PHC providers knew all three common clinical features. Items in this domain related to questions commonly asked by community members regarding their oral health, such as those covering xerostomia, dental fluorosis, oral ulcerations, and unusual patches in the mouth. The mean domain score of the PHC providers was 2.66 out of a maximum of 6. Responses on the causes of dental caries and the signs and symptoms of periodontitis showed the lowest scores in the entire questionnaire (1.18 and 1.42, out of a maximum of 3, respectively). Most (79%) of the PHC providers knew of one or two factors that cause dental caries, mainly sugary foods and poor oral hygiene. Nearly 30% of the PHC providers did not know any of the three signs and/or symptoms of periodontal disease.

Frequency of Performing an Oral Examination

It was found that 41% of the PHC providers would perform an oral examination on a maximum of 10 out of any 50 patients who presented with an oral problem. Another 37% would examine between 10 and 40 of 50 patients. A minority, 22%, would examine more than 40 of 50 patients.
DISCUSSION

This study examined the knowledge of PHC providers, the main healthcare providers in the Nairobi PHC facilities, regarding common signs of HIV-related oro-facial lesions and other general oral conditions.

Knowledge of HIV-related Oro-facial Lesions

The results indicated that PHC providers had only moderate knowledge about HIV-related oro-facial lesions. This was unexpected, as the current prevalence of HIV infection is high in Kenya [13]. Moreover, the in-service training packages on the management of HIV patients [23-25], routinely delivered to PHC providers, should have enhanced their skills. An expectation was that PHC providers would also know that OPC is not always a sign of HIV infection, since they handle patients with OPC in various clinics, such as child welfare and diabetes clinics. Contrary to this expectation, a large number of PHC providers (39%) still thought that OPC was always a sign of HIV infection, probably because this lesion is the one most frequently detected in HIV patients. OPC is of great clinical significance, both in patients of unknown HIV status and in those already receiving HAART treatment. This lesion clinically presents as the pseudomembranous or erythematous type or as angular cheilitis, usually resulting in complaints of pain and difficulty in eating and swallowing [26]. However, only a minority (22%) of the PHC providers routinely examined the mouths of their patients. This percentage is lower than the 32.2% observed among nurses in South Africa [27]. Even in the intensive care unit setting, where oral care is part of the protocol for critical patient care, 21% of nurses did not assess the mouths of patients [28]. In similarity to other studies among nurses and dental students [27,29,30,31], PHC providers were more knowledgeable about OPC than about other (less frequent) HIV-related oral lesions. In this study it was noted that PHC providers were more familiar with the pseudomembranous type than with the other two types of OPC. Although the question read "oro-pharyngeal candidiasis", the tongue was the most common site mentioned. It is possible that the PHC providers frequently examine the tongue when they routinely check for pallor. This agrees with the response of PHC providers that in their routine practice they mainly examine teeth and swollen gums and sometimes the tongue. It is evident that on the few occasions when the PHC providers examine the mouth, they miss the diagnosis of OPC when it is present in other parts of the mouth, such as the pharynx and the buccal mucosa, and when it is of the erythematous type or angular cheilitis. Patients with HAART drug resistance often develop any type of OPC. This lesion may be common in this population, where inadequate patient access to HAART and nutritional support, as well as tuberculosis (TB)/HIV co-infection, contribute to non-adherence and HAART drug resistance [32,33]. Although this lesion is often managed in the HIV comprehensive care clinics as an opportunistic infection, PHC providers lacked knowledge about its clinical significance as a marker of HAART treatment failure. They mainly attributed its presence to side-effects of HAART. PHC providers in this setting miss opportunities to recognise HAART failure among their patients who develop OPC.
PHC providers did not suspect HIV infection in a child with enlarged parotid glands (diffuse bilateral non-painful swelling), even when the clinical history was suggestive of HIV exposure. Most associated it either with mumps or with a dental abscess. This gap between knowledge of HIV-related lesions and knowledge of the association of the lesions with HIV infection was also observed among dental students and oral hygienists [34]. However, knowledge about the lesions, and about the association of the lesions with HIV infection, was noted as being generally higher among dental students than among nurses [29,34-36]. Given the high prevalence of HIV infection in this community, the HIV-related lesions that were assessed were likely to have been encountered in the consulting rooms of the PHC providers. Agbelusi and Wright [37] showed that laboratory tests confirmed that nearly all (92%) patients identified in a dental clinic as having HIV-related oral lesions were HIV-infected. The moderate, and in some aspects low, knowledge of PHC providers in identifying these lesions, and the lack of association of the lesions with HIV infection, indicate missed opportunities for HIV testing and early detection and treatment in this setting. Moreover, nearly all PHC providers said that they never performed an extra-oral examination in routine practice.

Knowledge of General Oral Health

Assessment in this domain related to questions that patients and community members commonly ask health personnel regarding their oral health. These included dental caries, periodontal diseases, mouth ulcerations, and dental fluorosis, which PHC providers are expected to record in the outpatient tally sheet. In similarity to other studies [27,36,38,39], this study indicated that PHC providers in this setting lacked sufficient knowledge of oral health topics. Their knowledge regarding general oral health was much lower than that for HIV-related oral lesions. However, it was noted that the majority, 80%, knew that sugar and poor oral hygiene contribute to dental caries. Information on the importance of good oral health could be integrated into daily health talks in PHC facilities, especially in antenatal care and child welfare clinics, to prevent dental decay in children. Although all patients who seek general consultations with the PHC providers should ideally be orally examined, the choice of patients was narrowed down to those presenting complaints that warranted an oral examination, e.g., pain experienced when chewing and swallowing, skin rashes, and febrile conditions. The low scores suggested that extra-oral examinations were rarely performed. PHC providers reported that when they occasionally did intra-oral examinations, they generally looked at the condition of teeth and gums and were therefore likely to miss other oral pathologies. In personal communication with LNK, the PHC providers stated that they examined the throat only when the patient had a suggestive history, such as an upper respiratory tract infection or tonsillitis. In an earlier study [40], PHC providers indicated that they were willing to do oral examinations but failed to do so owing to a lack of basic tools such as spatulas, mouth masks, and torches. The results of the present study show that they also lacked knowledge about performing the expected oral healthcare tasks.
The PHC providers reported that oral health topics were scantily covered in their pre-service training (some of the short in-service courses that they regularly receive, such as the integrated management of adulthood illnesses [25], include an oral health module on HIV care). Apparently, the training modules do not adequately cover the practical skills that the PHC providers need for identifying and managing HIV-related oral lesions. This is partially understandable, as recognition of dental diseases needs much experience and training. PHC providers cannot be expected to gain sufficient knowledge and insight regarding dental diseases in short training courses. A training module for the PHC providers, incorporating a practical session on how to perform a basic oral examination and recognise (HIV-related) oral lesions, is recommended to build their skills and competences.

Significance for Public Health

Immunological impairment in HIV patients often predisposes them to HIV-related oral lesions. Therefore, comprehensive care of these patients requires healthcare workers to include oral healthcare of the patients at all levels of care. The targeted health personnel at PHC level are in many cases non-dental professionals who are expected to integrate diagnosis and management of oral lesions into their routine practice. These health workers need much training and sufficient skills to perform the expected tasks. Integrating oral health care into PHC is a global and a national oral policy objective, which calls for the development of a "special curriculum for existing training programs for special groups". As no questionnaire was available in the literature, assessment of knowledge gaps was the first important step in tailor-making a training program for PHC providers. The questionnaire that was used can be adapted to different settings.

CONCLUSION

Knowledge of primary health care providers in Nairobi East District, Kenya, regarding HIV-related oral facial and other common oral diseases and conditions was found to be generally inadequate.

Table 1. Comparison of proportions of clinical staff (medical doctors, dentists, nurses, and clinical officers) in Kenya, countrywide and in Nairobi East district, in 2009. Source: *[16,17], **[20]. All clinical officers and nurses offering clinical services were included. A 4-year diploma course in clinical medicine provides basic training for clinical officers. Registered and enrolled nurses complete 3.5 and 2 years of training at diploma and certificate levels, respectively.

Trained data clerks entered raw data into an Excel file. A random check on the entered data was done by LNK. Two experienced dentists (LNK and ED) coded all answers to open-ended questions. The codebook was improved through the use of selected questionnaire responses [22].

Table 2. Cronbach's alpha, mean, standard deviation, minimum and maximum scores by domain, retrieved from the questionnaire on PHC providers' knowledge of HIV-related oral facial lesions and other common oral lesions and conditions (columns: Domain/Categories of questions; Cronbach's alpha; Mean; Std dev; Minimum; Maximum).
A value-driven approach to addressing misinformation in social media

Misinformation in social media is an actual and contested policy problem, given its outreach and the variety of stakeholders involved. In particular, increased social media use makes the spread of misinformation almost universal. Here we demonstrate a framework for evaluating tools for detecting misinformation using a preference elicitation approach, as well as an integrated decision analytic process for evaluating desirable features of systems for combatting misinformation. The framework was tested in three countries (Austria, Greece, and Sweden) with three groups of stakeholders (policymakers, journalists, and citizens). Multi-criteria decision analysis was the methodological basis for the research. The results showed that participants prioritised information regarding the actors behind the distribution of misinformation and tracing the life cycle of misinformative posts. Another important criterion was whether someone intended to delude others, which shows a preference for trust, accountability, and quality in, for instance, journalism. How misinformation travels is also important. However, all criteria that involved active contributions to dealing with misinformation were ranked low in importance, which suggests that participants may not have felt personally involved enough in the subject or situation. The results also show differences in preferences for tools that are influenced by cultural background and that might be considered in the further development of tools.

Introduction

Misinformation in social media is currently attracting a lot of attention. Misinformation is not a new phenomenon and has probably existed since the dawn of humanity. Structural evidence of scientific research on misinformation can be found in Allport and Postman's (1946) basic law of rumour, which demonstrates that the strength of a rumour depends on the importance of the subject and individual concerns regarding it, as well as the time and ambiguity of the evidence on the topic. New technical capabilities, such as social media, have naturally made these effects more universal. The 2000s witnessed rapid developments in social media and its increased outreach to everybody with Internet access. This has facilitated the spread of information, including misinformation and rumours, in virtually everything from local neighbourhoods to global concerns (Del Vicario et al., 2016). Until recently, there has been limited scientific evidence on how to deal with misinformation, but research on the topic has increased rapidly over the past few years. For instance, researchers have suggested various ways of dealing with citizen awareness, such as nudging, as a way of vaccinating social media users against misinformation (Piccolo et al., 2019). Other topics studied include nudging for accuracy in sharing on social media (Pennycook et al., 2020) and the limits of human cognition in dealing with and spreading misinformation. Finally, researchers have examined a variety of approaches for making fact-checking more efficient, such as automatic detection of misinformation and correction of data, while at the same time pointing out the importance of human fact-checkers, as fully automated fact-checking methods are not yet strong enough. 1 This systemic problem requires stakeholder involvement at different levels, as misinformation is so widespread and constantly changing. Extensive stakeholder involvement is necessary for designing policies, methods, and tools.
However, existing approaches to developing online tools tend to follow the traditional path of dissemination of knowledge from science to stakeholders, while viewing technology users as passive consumers of finished products rather than active co-creators. This is particularly alarming today, when available anti-misinformation products and tools are still new to the mass market and hence malleable, which is rare in the life cycle of a product (Smith and Medin, 1981; Svahn and Lange, 2009). Value-based software engineering (Boehm, 2003) is an emerging approach that aims to develop software tools (e.g., the tool by Aurum and Wohlin, 2007) based on the values and objectives of various stakeholder groups (Biffl et al., 2006), providing an economic categorisation of the value concept based on the monetary exchange between a customer and a provider. In this study, we investigate two major research questions:

• What are the preferences for, perceptions of, and views of the features of tools for dealing with misinformation?
• How do these preferences depend on the cultural backgrounds of stakeholder groups and participants?

Our goal is to study the preferences of various stakeholder groups for features of tools, to study the impact of cultural background on these preferences, and to develop recommendations for considering these preferences in the further development of tools for dealing with misinformation. The next section provides a background on misinformation and discusses why we need automatic tools in a general setting. Section "Methodology" describes the integrated methodology used, and Section "Results" presents the results and a discussion. Finally, Section "Conclusions" concludes the article.

Background

A variety of definitions exist for misinformation, disinformation, fake news, rumours, and similar terms, and a large number of them emphasise the distinctions between misinformation and disinformation, as well as between disinformation and fake news. A review of the 2016 presidential election in the United States, for instance, identified six different types of misinformation: authentic material used in the wrong context, imposter news sites designed to look like known brands, fake news sites, fake information, manipulated content, and parody content (Wardle, 2016). Wardle and Derakhshan (2017) suggested that misinformation refers to misleading information created without the intent to harm, whereas disinformation refers to information deliberately fabricated with the intent to impact social groups or societies. Burgoon et al. (2003) discussed misinformation in terms of deceptive language and false context. Farrel et al. (2018) distinguished between disinformation and misinformation, considering both to be subsets of misinformation in the broad sense: disinformation largely involves the intent to deceive, whereas misinformation does not need to involve intentional deception. Giglietto et al. (2016) proposed a taxonomy based on perceptions of the source, the story, and the context, and on the decisions of the audience and the propagator. In their taxonomy, there is "pure disinformation" when both the original author and the propagator are aware of the false nature of information but nevertheless decide to share it. There is "misinformation propagated through disinformation" when information is originally produced as true and then shared by a propagator who thinks it is false.
There is also "disinformation propagated through misinformation" when information is devised as false by a creator but is perceived as true by a propagator. Irrespective of such distinctions, both misinformation and disinformation impact the public debate on issues such as health and science (e.g., the anti-vaccine movement), foreign policy (e.g., the wars in Iraq and Ukraine), migration, elections and so on. Recognising this, researchers from a variety of disciplines, including social sciences such as journalism (Ekström et al., 2019) and psychology (Ecker, 2017), have examined misinformation and disinformation. The problems of misinformation and disinformation are usually called "wicked problems" by design scientists, as no single comprehensive solution is capable of fully resolving them and attempts to mitigate them often can make them worse. Some examples of this include the backfire effect (Nyhan and Reifler, 2010), false misinformation warnings (Freeze et al., 2020), and the naiveté of social engineering in technology (Tromble and McGregor, 2019). Misinformation and disinformation are also studied with regard to social psychology (e.g., people's values, beliefs, information literacy, and motivations), regulatory and technical perspectives (social media, detection tools), and the practice of fact-checking. Given the large volume of published work we rely here on Vanenzuala et al. (2019), who conducted a meta-analysis of 650 articles on this topic to identify regulatory, technical, and normative aspects of misinformation. Cognitive psychologists have investigated the effectiveness of corrections and warnings of misinformation for a long time. Ecker et al. (2010) studied whether the continued influence of misinformation can be reduced by explicit warnings at the outset that people may be misled. They found that a specific warning with detailed information was more efficient than a general warning reminding people that facts are not always properly checked. However, a specific warning can reduce reliance on an outdated source of information but not eliminate it. Pennycook et al. (2018) investigated how fluency via prior exposure contributes to the believability of fake news. They found that tagging fake stories as disputed is not an effective solution because it simply attracts even more attention to the problem. They also found that repeating headlines increases perceptions of their accuracy. Schwarz et al. (2016) found that the myth-versus-fact article format is not efficient to deal with fake news because such articles subtly reinforce the myths through repetition and further increase the spread and acceptance of misinformation. Unfortunately, such articles make misinformation even more easily accessible by repeating it and illustrating it with pictures. This increases the probability that misinformation that the communicator wanted to debunk will continue to be delivered. They found that it is better to simply provide correct information rather than try to correct wrong information. They also identified five criteria that people use to assess the accuracy of information: acceptance by others, amount of supporting evidence, compatibility with one's own beliefs, general coherence of the statement, and credibility of the information source. Lerman (2016) stated that the interplay between humans' cognitive limits and the social media network structure influences the spread of information. Finally, Chan et al. 
(2017) found that debunking messages for the correction of misinformation only increase the effects of the misinformation. A number of tools for detecting and dealing with misinformation are currently available, including the following:

• Botometer detects social bots and classifies online social media user accounts as either bots or human beings. This classification is based on various features of the user account profile, online social network structures, historical patterns of activity, and language and sentiments (Yang et al., 2019; Botometer tool, 2019).
• Foller.me analyses the profiles and tweets of social network users and shows various user characteristics, for example, general information such as name, location, language, join date, and time zone; statistics about tweets (number of tweets, followers, following); and tweet analysis (tweet replies, retweets, tweets with links). The main idea is to understand the detailed profiles of social media users in order to verify social media content (Sloan and Quan-Haase, 2016; Foller.Me tool, 2019).
• TinEye analyses user-generated content, like photos and videos, and detects whether an image, audio content, or video content is fake (Middleton, 2017; Tineye Tool, 2019). Members of the global community, in particular journalists, use this tool and others, such as FotoForensics and Google Reverse Image, to examine user-generated content.
• Rbutr is a machine-learning algorithm applied to community feedback to capture webpages with disputed, rebutted, or contradicted parts elsewhere on the Internet. This tool also provides sector-wise (e.g., health, education, immigration, climate change) repositories of news and community rebuttals and provides warning messages (e.g., "This is potentially malicious") for particular news webpages with a bad reputation.
• Fakespot is a browser plugin that assesses the validity of online reviews based on their URL (Fakespot Analyzer Tool, 2019).
• NewsGuard is another browser plugin that integrates the opinions of a large pool of journalists and informs users about the reliability of news websites and organisations. It uses nine journalistic credibility and transparency criteria that are combined into labels (NewsGuard Tool, 2019).
• Greek Hoaxes Detector is a browser plugin that analyses news articles and assigns labels such as "scam", "hoax", or "fake" (Ellinika Hoaxes Tool, 2019).
• DejaVu is a system for detecting visual misinformation in the form of image manipulation, aimed at use by journalists (Matatov et al., 2018).
• Social Sensor is a software that gathers social media data and analyses trends and what influences them (Schifferes et al., 2014).

The aforementioned tools were designed for particular purposes and are limited in several respects, such as the following:

1. Requirements for participation: Some tools were developed based on stakeholder feedback. However, the developers did not involve end users in the process of developing the tools. They also did not collect end users' preferences regarding these tools. When stakeholders were involved, it was frequently only one group of stakeholders or a very narrow circle of professionals who deal with misinformation. This has resulted in a narrow focus on professional intent instead of on how consumers of information can reduce their uptake of misinformation.
2. Technical issues: Almost all of these browser plugins support only Google Chrome.
3. Lack of integration of the views of fact-checkers: Fact-checkers are part of a growing community that plays an essential role in media policies.
However, several of these tools were developed without any input from this community, which has led to unnecessarily incomplete detection mechanisms. Consequently, the full potential of fact-checking services has not been realised, and the lack of transparency in development and input parameters makes them unclear. This has led to decreased user trust, which is why it seems reasonable to evaluate the functionality of existing fact-checking tools to identify possible gaps. This is best done in a collaborative environment with a high degree of involvement by relevant stakeholders (Horne et al., 2019). A few studies have focussed on assessing the perceived needs of journalists navigating misinformation. In Schifferes et al. (2014), 22 journalists participated in an interview regarding the functionalities most relevant for tools countering online misinformation. According to this study, journalists emphasise the need to predict breaking news and to verify content on social media as true or false. Brandtzaeg et al. (2018) conducted a study with 32 journalists and social media users on perceptions of fact-checking tools, concluding that users must be able to understand the limitations of tools and that tools need to be transparent on all ends, including in terms of funding. To the best of our knowledge, policymakers have not yet been included in such studies, although it is clear that policymakers desire the delivery of tools for dealing with misinformation.

Participatory governance and value-based software engineering

Several scientific works have discussed the need to understand the typology and features of misinformation (Rossi and Lenzini, 2020; Koulolias et al., 2018). The design and evaluation process we argue for in this article involves two components: (a) co-creation by users and elicitation of user preferences and (b) adequate aggregation and evaluation mechanisms. By "co-creation", we mean a process that is aligned with Peters and Heraud (2015) and Gummesson et al. (2014): an adaptive and inclusive approach to participatory governance based on the engagement and involvement of various stakeholder groups. Participatory governance, which is embodied in processes that empower citizens to participate in public decision making, has been gaining acceptance as an effective means of tackling democratic deficits and improving public accountability. Participatory governance and co-production processes require an understanding of human factors, such as individual patterns of decision-making processes and cognitive and behavioural biases; institutional structures; perceptions of the risks, benefits, and costs of various policy interventions; as well as a need for compromise-oriented solutions to honour diverse views and a variety of voices. Participatory governance also requires the involvement of various stakeholders. Stakeholder involvement in decision-making processes and in the development of tools and decision support systems is essential for meeting stakeholder requirements (cf. Komendantova et al., 2014). Furthermore, authors such as Kujala and Väänänen-Vainio-Mattila (2009) have shown that it is essential to consider stakeholders' values regarding the functionalities and features of a tool when designing new software, and that tools so designed are more likely to be used by the groups in question. To achieve this, a number of techniques may be used.
Khari and Kumar (2013) tested common approaches experimentally with stakeholders, concluding that value-oriented prioritisation (VOP) met the demands and the environment of the stakeholders better than other techniques. VOP, a so-called preference-based approach that relies on techniques and models from the field of decision analysis, aims to elicit users' values by studying their preferences (see Vetschera, 2006, for an introduction in the context of software engineering). Basic VOP is a scoring-based additive weighting approach in which a stakeholder or prospective user ranks features (or requirements) according to his or her value-in-use (see Azar et al., 2007). If there is more than one user or stakeholder, the VOP process turns into a group decision problem, i.e., gathering preferential data from several stakeholders or prospective users to identify a selection of features that provides maximum value to users while respecting the resources of the development team. However, VOP in itself is not flexible enough to handle ranking statements and aggregate preferences from several stakeholders in an elaborated way. For this purpose, there exist novel methods from the field of decision analysis, described in the following section.

Methodology

The empirical data in this study were collected during a co-creation process with stakeholder groups that used workshops and interviews to extract design components from stakeholder dialogues and findings. The aim was to provide insights into the expected requirements for anti-disinformation tools. A specially adapted multi-criteria decision framework (Danielson et al., 2020) was then used to understand the desirability of various system features of a tool for mitigating misinformation.

Workshop setup and participants

The co-creation workshops consisted of stakeholders from three groups (journalists/fact-checkers, policymakers, and citizens) in three countries (Austria, Greece, and Sweden). The purpose of the workshops was to discuss misinformation and, over several sessions, collect perceptions of misinformation and test and discuss tools that address misinformation, as well as various features of these tools. Furthermore, we explored how information about particular online tools can be transferred to stimulate critical thinking and trust, as the latter is an important parameter in software adoption (Wu et al., 2011). We used the following sampling and invitation process. After thorough desktop research, a list of organisations was created that identified the most important stakeholders on the topic. A final contact list of various organisations representing our three stakeholder groups (policymakers, citizens, and journalists) was prepared. Hosting pilot team members were assigned to contact the organisations and to update the list accordingly. Subsequently, formal letters of invitation were issued to the target participants. The letter included a brief description of the Co-Inform program and the workshop objectives. It also included the workshop agenda (Appendix II). The team followed up with phone calls to the identified stakeholders and personally explained to them the goals of the project, the workshop methodology, and the importance of their participation. Two days before the event, a reminder e-mail was sent to the list of confirmed participants, providing them with more information about the location of the event.
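To make the scoring-based additive weighting idea behind VOP more concrete, the following Python sketch converts each stakeholder's ranking of features into surrogate weights and aggregates them additively across stakeholders. The rank-sum weighting used here is one simple surrogate scheme chosen purely for illustration; the calibrated P-CAR representation implemented in DecideIT, described below, is more elaborate, and the rankings shown are invented:

```python
import numpy as np

def ranksum_weights(n: int) -> np.ndarray:
    """Surrogate weights from a ranking of n items (rank-sum scheme).
    Rank 1 (best) gets the largest weight; weights sum to 1. This is one
    simple surrogate scheme; P-CAR as implemented in DecideIT uses a
    different, calibrated representation."""
    raw = np.arange(n, 0, -1, dtype=float)   # n, n-1, ..., 1
    return raw / raw.sum()

# Hypothetical rankings of four features by three stakeholders
# (position i holds the index of the feature ranked (i+1)-th).
rankings = [
    [2, 0, 3, 1],   # stakeholder A: feature 2 best, then 0, 3, 1
    [0, 2, 1, 3],   # stakeholder B
    [2, 1, 0, 3],   # stakeholder C
]

n_features = 4
w = ranksum_weights(n_features)
scores = np.zeros(n_features)
for ranking in rankings:
    for position, feature in enumerate(ranking):
        scores[feature] += w[position]       # additive aggregation across stakeholders

scores /= len(rankings)                      # average surrogate value per feature
print("aggregated feature values:", np.round(scores, 3))
```

Running this toy example yields the highest aggregate value for the feature ranked first by two of the three stakeholders, which is the intended behaviour of an additive group aggregation.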
The formats of the workshops, as well as the sampling and invitation processes, were identical in all three countries to exclude the possibility that the results were influenced by differences in sampling process or format. The policymaker group consisted of government organisations (Ministry of Finance, Ministry of Education, Ministry of Health), non-governmental organisations (Solidarity Now, Danish Refugee Council, UNHCR, and others), grassroots organisations (domain expert organisations like Velos Youth Center), and municipality services organisations (organisations that provided aid to refugees, like the Greek Refugee Council, which could help us recruit refugees). The citizen group consisted of people from local communities, people from civil societies, refugees, migrants, as well as academics. The journalist group consisted of people from news agencies, radio, and television. The first co-creation workshop took place in September 2018 in Tokyo, Japan, and was organised by the International Council for Information Technology in Government Administration and the Organisation for Economic Co-operation and Development (OECD). The 103 participants at the first multi-stakeholder workshop included 11 government chief information officers, 65 high-ranking public officials, 8 journalists, 8 executives of international organisations, 9 executives from the private sector, and 2 policymakers. The purpose of the workshop was to assess the effects of misinformation in society and suggest mitigation strategies for the public sector. The second co-creation workshop was portioned among the three countries and took place in February-March 2019. The purpose of this workshop was to assess the initial needs of participants around misinformation, their level of trust in news sources, and their perceptions of misinformation, and to collect their recommendations on possible interventions and policies. In Vienna, the Co-Inform workshop was organised in cooperation with the Ministry of Economy and Digitalization and included 21 policymaker, journalist, and citizen stakeholders, including representatives of the Austrian Chamber of Labour, the Housing Service of the Municipality of Vienna, and the Austrian Association of Cities and Towns. In Sweden, it included 16 participants, of whom four were journalists, five policymakers (mainly from the Social Democratic Party), and seven citizens (including from Anti-Rumour Sweden). It was hosted by the Botkyrka Multicultural Centre. In Greece, the workshop took place in the community of Serafeio with 31 participants (9 journalists, 9 policymakers, and 13 citizens), including representatives of the Ministries of Finance, Digital Policy, Health, Immigration, and Education. The third co-creation workshop took place in these same countries in November 2019. The major theme of the third Co-Inform workshop was "Which features make people engage with misinformation-combatting tools, and why?" The theme was addressed over a series of five sessions: introduction to the overall workshop process, a categorisation theory exercise, assessment of features of the interface of a potential tool, multi-criteria decision analysis (MCDA) sessions, and repertoire grid-nudging focus group sessions. Altogether 15 participants attended the third Co-Inform workshop in Sweden: 3 journalists, 1 policymaker, and 11 citizens. In Greece, 19 people participated: six citizens, seven journalists, and six policymakers.
In Austria, 16 stakeholders attended the workshop: five citizens, six journalists, and five policymakers. The only difference among the aforementioned workshops was that the participants belonged to three different cultures. Recruitment for the second and third workshops differed as follows: • Workshop 2 (as per our article): We recruited participants from all stakeholder groups (citizens, journalists, policymakers) who were related to organisations that worked with migrants. • Workshop 3 (as per our article): We recruited participants from all stakeholder groups (citizens, journalists, policymakers) without focussing on any specific domain. The format, agenda, and master plan of the workshops were as follows. All pilot countries (Greece, Austria and Sweden) followed a common format that included an agenda, templates, survey forms, exercises, and workshop sessions based on a common master plan that was prepared by the responsible Co-Inform project partners in consultation with Co-Inform project technical partners. Each workshop followed the same master plan. In addition, discussions were held in all three Co-Inform pilot countries on common topics as per the master plans of the workshops. A main objective of the third workshop was to collect input on perceptions of functionalities, user experience features, and system features of tools that deal with misinformation in social media. Four sessions were conducted during each workshop. During the first session, the participants were presented with the features listed below. This was followed by a detailed discussion of each feature and the collection of feedback on what should be included or added. The following features were subject to evaluation by the participants: • Feature 1 (Awareness): I am aware of existing misinformation online. • Feature 2 (Why and when): I want to know why a claim has been flagged as misinformative. And I want to know who flagged it and when. • Feature 3a (How it spreads and by whom): I come across something that I find misinformative. I would like to know how this information has spread online and who has shared it. • Feature 3b (Life cycle [timeline]): I want to know the life cycle (timeline) of a misinformative post/article (e.g., when it was first published, how many fact-checkers have debunked it, and when it was shared again). • Feature 4a (Sharing over time): I want to be able to quickly understand how much misinformation people have shared over time through an overall misinformation score. • Feature 4b (How misinformative an item is): I want to be able to quickly understand how much misinformation a news item or tweet may contain through the provision of an overall misinformation score. • Feature 5a (Instant feedback on arrival): When I encounter a tweet from someone else that contains misinformative content, I want to be informed that it is misinformative. • Feature 5b (Inform on consistent accounts): I want the Co-Inform system to inform me of which accounts (within my network) consistently generate/share/create misinformative content. • Feature 5c (Self-notification): I want the Co-Inform tools to notify me whenever I repeatedly share misinformation. • Feature 6 (Credibility indicators): I want to see credibility indicators that I will immediately understand, and I want the credibility indicators to look very familiar, like other indicators online. • Feature 8 (Post support or refute): I want to be able to post links to reputable articles and data that support or refute the story or claim. 
• Feature 9 (Tag veracity): I want to be able to tag the veracity of an element (tweet, story, image, or sentence/claim) in the current tab I am seeing. • Feature 10 (Platform feedback): I want to be able to receive feedback on what the platform is doing and has done with the tags and evidence I have submitted. The participants then ranked the features under three criteria, creating three different rankings of the features, one for each criterion. The three criteria were as follows: • Trust: for trust in this tool • Critical thinking: for making me think twice before I trust and/or share • Transparency: for transparency in how the tool makes judgements Thereafter, they ranked the three criteria with respect to their relative importance based on a question on the form: "The top-ranked features under Trust, do they provide more or less value for you compared to the top-ranked features under Critical thinking?" If the answer was "more," then trust was ranked above critical thinking (i.e., it was deemed to be of more relative importance, because the participant perceived greater value if the top-ranked features for trust were available compared to the top-ranked features for critical thinking). Elicitation and evaluation. Danielson et al.'s (2020) decision analytic framework was used as the rank-based elicitation and evaluation method. The method was implemented in DecideIT 3.1, which was also used in the workshops. Briefly, DecideIT is capable of operating with incomplete or numerically imprecise input data, such as rankings and interval value statements, in a combined model. To represent the ranking statements, we used a cardinal ranking approach (P-CAR). P-CAR is a calibrated method of creating feasible input in the form of surrogate imprecise value statements, which are derived from rankings provided by stakeholders. The feasible information is represented in the form of linear inequalities (greater than) in combination with interval bounds and a focal point that represents the most feasible surrogate value for a given element given its position in the ranking. This enables conventional multi-attribute value aggregation (Dyer and Sarin, 1979), so the results can be evaluated across multiple stakeholders and criteria. See Danielson and Ekenberg (2019) for details on P-CAR. The evaluation method originated from earlier work on evaluating decision situations involving numerically imprecise input. To avoid problems with aggregation when handling set membership functions and similar features, higher-order distributions for better discrimination between the possible outcomes are introduced. To alleviate the problem of overlapping results, the methodology also contains a new evaluation method based on the resulting belief mass over the output intervals, without introducing further complicating aspects into the decision model. During the process, consideration is given to the entire range of output values, as well as how plausible it is that a specific feature will outrank the remaining ones, thus providing a robustness measure. In this way, DecideIT can evaluate the actual proportion of aggregated values for which a feature is considered more favourable than another, that is, whether there is a significantly larger amount of the feasible information (i.e., the set of rankings provided by the participants) in which one feature is deemed to provide more value compared to another feature.
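To make the elicitation step concrete, the sketch below converts a single participant's ordinal feature ranking into surrogate focal values with crude interval bounds. It uses rank-order-centroid values as a simplified stand-in for the calibrated P-CAR focal points (the exact calibration is described in Danielson and Ekenberg, 2019); the feature labels and the spread parameter are hypothetical.

```python
def roc_values(ranking):
    """Rank-order-centroid surrogate values for an ordinal ranking
    (best item first); a simplified stand-in for P-CAR focal points."""
    n = len(ranking)
    vals = [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]
    return dict(zip(ranking, vals))

def with_interval_bounds(focal, spread=0.5):
    """Attach crude interval bounds around each focal value, mimicking
    the imprecise (interval-valued) statements DecideIT operates on."""
    items = sorted(focal, key=focal.get, reverse=True)
    bounds = {}
    for i, item in enumerate(items):
        v = focal[item]
        lo = focal[items[i + 1]] if i + 1 < len(items) else 0.0
        hi = focal[items[i - 1]] if i > 0 else 1.0
        bounds[item] = (v - spread * (v - lo), v, v + spread * (hi - v))
    return bounds

# one participant's ranking of five features under the Trust criterion
ranking = ["F2", "F3a", "F3b", "F4a", "F6"]
print(with_interval_bounds(roc_values(ranking)))
```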
This comparison can be seen more concretely in Fig. 1, which shows the proportion of feasible information (e.g., Feature 2 is deemed to be more valuable than the rest, and Features 3a and 3b are basically equal and more valuable than Feature 4 and the remaining features). See Danielson et al. (2020) for a detailed description of the tool and its underlying theory and Larsson et al. (2018) for details on aggregation across multiple stakeholders/participants. In line with OECD's Recommendation on Digital Government Strategies, the findings from the first co-creation workshop in Tokyo emphasised the need for open, inclusive, accountable, and transparent processes by national governments and highlighted the fact that digital transformation in the public sector, as well as increasing accessibility of the Internet, has exacerbated various problems related to misinformation. Given the importance of factual information for combatting misinformation in the public arena, governments need to collaborate with stakeholders and invest in innovative ways of dealing with misinformation. A number of specific actions were proposed to deal with this societal challenge. Empowerment of citizens, encouragement of engagement, education, moderate legislative action, as well as investment in new technologies are invaluable means of tackling misinformation. For fragmented technological and innovative solutions to succeed in tackling misinformation on a broad scale, they need to be integrated and embedded into a co-creational system of policies. More collaborative and effective management of misinformation needs to be supplemented with informed behaviours among citizens. Creating a trusted environment for citizens with adequate education is necessary as we enter an era in which big technological advances have the potential to disrupt even more than they already have. The subsequent workshops took place in three different locations on two separate occasions and provided cross-cultural data for comparing the needs of various stakeholder groups related to decision support models. Data were collected, and the needs of citizens, policymakers, and journalists were identified. Table 1 shows that the need for collaboration and facilitation of tools was identified in all three case countries and by all three stakeholder groups. The need for tools to address education and awareness raising was also identified in all three countries and across all three groups of stakeholders. These tools are also required for sharing reliable information. However, an automatic correction mechanism for validating information was identified as the least desired option. These preferences were identified during roundtable discussions in the workshops. Discussions followed the same protocol in all three countries: Topics for discussion were provided, but preferences were not identified in advance.

Table 1 Major preferences of various stakeholders regarding how to address misinformation in the three case countries (Sweden, Greece and Austria).

Major preference | Citizens | Policymakers | Journalists
The main goal of misinformation (political, societal) should be tracked | +++ | + | ++
Internet and social media, as well as the mass production of information, are factors that help the spread of fake news | ++ | + | ++
Need for education and relevant tools | +++ | ++ | +++
Provision of references for validation of information | + | + | +
Tools that give indications and extra information on an article are needed | + | +++ | +
Tools that help in sharing are needed | + | + | +
Collaboration among all is needed through facilitating tools | +++ | +++ | +++
The major preferences were identified based on a review of the transcripts of these discussions and on the frequency of mention of certain topics. A rather passive attitude. The results for all groups of stakeholders and all three countries showed interest in the spread of misinformation but revealed a rather passive attitude toward dealing with misinformation. The three most popular answers across all groups of participants were the following: • Feature 2 (Why and when): I want to know why a claim has been flagged as misinformative. And I want to know who flagged it and when. • Feature 3a (How it spreads and by whom): I come across something that I find misinformative. I would like to know how this information has spread online and who has shared it. • Feature 3b (Life cycle [timeline]): I want to know the life cycle (timeline) of a misinformative post/article (e.g., when it was first published, how many fact-checkers have debunked it, and when it was shared again). A general observation was thus that the participants wanted to know who spread the misinformation, why, and how, as well as the timeline of the spread. However, although they wanted to be informed, they did not feel motivated to take further action. This is a rather passive way of dealing with misinformation in general. The participants wanted to be informed when an item was misinformative, but they did not prioritise dealing with the general topic of misinformation, actively reporting and correcting information, or being informed if they themselves were sharing misinformation (see Note 2). Figure 1 aggregates the results from all stakeholder groups in all three countries. The problem is a multi-stakeholder multicriteria decision problem that is evaluated as a multi-linear problem given the background information. This means that the values of the respective features are evaluated as weighted averages, balanced by weights derived from the criteria rankings above (i.e., equations of the format E(F_j) = Σ_i w_i·v_ij, where w_i is the weight of criterion i and v_ij is the value of feature F_j under criterion i). The value E(F_j) is computed by solving successive optimisation problems in the program DecideIT (cf., e.g., Danielson and Ekenberg, 2019). Briefly, the higher the bar for a feature, the better that feature is. The respective portions (blue, light grey and dark grey) show the impact of the respective criteria. Furthermore, the coloured squares show the robustness of the results. Green indicates a significant difference between features, which means that there must be substantial changes in the input data for the result to change. Orange indicates a difference that is more sensitive to the input data. Black indicates that there is basically no difference between the features (see Note 3). For instance, from Fig. 1, we can see that Feature 2 (Why and when) is deemed more valuable than the rest. We can also see from the green square that this result is quite robust. From the figure, we see that Features 3a and 3b are basically equal (black square) but preferred over Feature 4 (orange square) and the remaining features. Some answers indicating a more active position, such as Feature 5c (Self-notification): "I want the Co-Inform tools to notify me whenever I repeatedly share misinformation" and Feature 6 (Credibility indicators): "I want to see credibility indicators that I will immediately understand, and I want the credibility indicators to look very familiar, like other indicators online," were ranked at the bottom. A small numerical sketch of this aggregation and robustness comparison is given below.
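As a numerical illustration of the multi-linear evaluation, the sketch below computes E(F_j) for a handful of features and adds a crude perturbation-based robustness check, a simplified stand-in for DecideIT's belief-mass comparison over output intervals. The focal values and weights are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
features = ["F2", "F3a", "F3b", "F4a"]

# invented focal values v[criterion][feature] and criteria weights w
v = {"trust":        {"F2": 0.52, "F3a": 0.27, "F3b": 0.15, "F4a": 0.06},
     "critical":     {"F2": 0.40, "F3a": 0.35, "F3b": 0.18, "F4a": 0.07},
     "transparency": {"F2": 0.45, "F3a": 0.25, "F3b": 0.22, "F4a": 0.08}}
w = {"trust": 0.6, "critical": 0.25, "transparency": 0.15}

# multi-linear evaluation E(F_j) = sum_i w_i * v_ij
E = {f: sum(w[c] * v[c][f] for c in w) for f in features}

# crude robustness check: perturb values within +/-20% intervals and
# count how often each feature outranks all the others
wins = {f: 0 for f in features}
for _ in range(10_000):
    noisy = {f: sum(w[c] * v[c][f] * rng.uniform(0.8, 1.2) for c in w)
             for f in features}
    wins[max(noisy, key=noisy.get)] += 1
print(E, {f: wins[f] / 10_000 for f in features})
```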
Citizens. Table 2 shows the main differences in citizens' attitudes in the case countries. Heterogeneity regarding the best technology features can be observed. In particular, the most preferred option for the citizens from Vienna was as follows: • Feature 6 (Credibility indicators): I want to see credibility indicators that I will immediately understand, and I want the credibility indicators to look very familiar, like other indicators online. This is somewhat surprising, as this feature ranked quite low in the joint results for all three countries and all three groups of stakeholders (see Fig. 1) and was ranked lowest in the Athens workshop. There seemed to be quite a strong polarisation regarding this feature. In contrast, Stockholm and Athens had equal preferences for Features 3b and 4a. • Feature 3b (Life cycle [timeline]): I want to know the life cycle (timeline) of a misinformative post/article (e.g., when it was first published, how many fact-checkers have debunked it, and when it was shared again). • Feature 4a (Sharing over time): I want to be able to quickly understand how much misinformation people have shared over time through an overall misinformation score. Athens and Vienna had the lowest scores on Feature 5a (Instant feedback on arrival): "When I encounter a tweet from someone else that contains misinformative content, I want to be informed that it is misinformative." Figure 2 aggregates citizens' preferences in the three countries (equally weighted). We can see that Features 3b and 9 are the most preferred features (and equally so) when data from the three countries are aggregated, followed by Feature 2. There was thus no clear distinction between the two highest-ranked options. Journalists. An overview of journalists' preferences in the respective countries is shown in Table 3. The highest-ranked features for the journalist group were as follows: • Feature 2 (Why and when): I want to know why a claim has been flagged as misinformative. And I want to know who flagged it and when. • Feature 3a (How it spreads and by whom): I come across something that I find misinformative. I would like to know how this information has spread online and who has shared it. These two options were also the best choices in Stockholm and Athens and (consequently) corresponded well with the average for all groups and all three case countries (see Fig. 1). Another difference between journalists in Vienna and journalists in the two other case countries was that the highest-ranked features in Vienna were the lowest-ranked ones in Athens and Stockholm. Figure 3 aggregates journalists' preferences in the three countries (equally weighted). Policymakers. There were no results for policymakers in Stockholm because only one policymaker participated, and this participant did not make any choices for the analysis. Table 4 shows the two most prioritised features of the policymakers in Greece and Austria. Differences by country. There were some significant differences among the joint stakeholder groups in the different countries. As can be seen from Table 5, the stakeholder groups in Austria preferred credibility indicators. In Greece, the stakeholder groups preferred life cycle (timeline) features. In Sweden, the stakeholder groups preferred the "why and when" feature, as well as the "how it spreads and by whom" feature. It was beyond the scope of this study to understand why these differences appeared and how cultural background influenced them. 
However, it would be interesting for further research to identify the impacts of various cultural factors, such as values, history of participation in each country, and others, on preferences for features of tools for mitigating misinformation in various countries. Therefore, we recommend that developers of tools consider differences in preferences regarding tools that are influenced by cultural background. In sum, there was a variety of preferences for necessary features of tools for mitigating misinformation, and they seemed to be correlated with cultural background rather than with stakeholder group. That is, it seems that cultural context plays a large role in these preferences (and thus the intended specific use of the tools). An important conclusion from this (albeit limited) study is that tools for mitigating misinformation must be flexible, because it will probably be hard to produce a global (or context-independent) tool that balances all desirable features in such a way as to appeal to a general global public. Conclusions There is a need for comprehensive solutions for designing tools for mitigating misinformation from a value-based software engineering perspective. Here, we demonstrated a framework for evaluating tools for detecting misinformation using a preference elicitation approach, as well as an integrated decision analytic process for evaluating desirable features of systems for combatting misinformation. The framework was tested in workshop settings in Athens, Vienna and Stockholm, where a decision analytic methodology was used to address three interdependent factors: • Elicitation: There are significant difficulties with eliciting preferences, in particular in group decision making and negotiations. We therefore constructed a process based on preference rankings and negotiations, as well as algorithms for aggregating the results. • Evaluation: In general, decision analytic evaluation methods are inflexible in relation to the complex nature of decision problems, and it is usually difficult to aggregate information. Furthermore, there is little or no constructive feedback from evaluation methods. Our process enables more interactive use of group rankings and possibilities to see the effects of conflicting opinions and how they affect the final results. If the results are not robust (because opinions conflict too much), negotiation can begin again until there is agreement, or at least clarification of where the essential conflicts lie. • Communication: A key aspect of exploiting decision analytic support is facilitating communication of the result and of the perspectives of the members of a group. Here we suggest a workshop format with stakeholder representatives. This is, however, not a required format, and the same basic ideas can be used in a more distributed manner, where direct interaction can be combined with the use of questionnaires and feedback mechanisms for respondents. In our case study, the participants prioritised information regarding the actors behind the distribution of misinformation and tracing the life cycle of misinformative posts. The fact that it mattered to participants whether someone intended to delude others indicates the participants' preference for trust, accountability, and quality in, for instance, journalism. 
Furthermore, the three most valued features across all participants related to the timing and travel of misinformation (when, spread, life cycle), which indicates the significance the participants attributed to the chain of transmission by which a story reached the user. How misinformation travels thus seems to be important for the participants' assessment of the veracity of a claim. With Allport and Postman's (1946) basic law of rumour in mind, participants were interested in shining a light on one of the two critical prerequisites to rumour: They expressed a strong desire to achieve a clearer understanding of what was going on in the case of ambiguous facts or evidence. However, because features requiring active contribution were ranked low, the participants may not have felt personally involved enough in the subject or situation in which there was a need for rebuttal and clarity. The three most valued features can be assessed using the four-question truth assessment people undertake when evaluating a statement (Lewandowsky et al., 2012). All three features partially contribute to answering the second question of the assessment: "Is the story coherent?" Participants were interested in monitoring the spread of a story, expressing a desire to fill in gaps that a refutation may have left behind, as one piece of information cannot be assessed in isolation. Features 2 and 3a pinpoint the communicator ("who flagged it," "who has shared it"), attending to the third question: "Is the information from a credible source?" To process the information, participants turned to verifying the communicator's credibility. The fourth question, "Do others believe this information?", is inherent in Feature 3b, in which importance is attributed to how many fact-checkers have debunked a story. The number of fact-checkers may therefore create the perception of a strong consensus, which participants may use to counterbalance an erroneous perceived social consensus. No wireframe responds to the first question, "Is the information compatible with what I believe?", as capturing belief structures was outside the scope of our study. In the third Co-Inform co-creation workshop, we intended to determine the most preferred of the aforementioned 13 features among the stakeholders (citizens, journalists, and policymakers) in the Co-Inform project pilot countries (Greece, Sweden, and Austria). We thoroughly explain stakeholders' preferences for these features in Section 'Results'. However, we did not specifically gather participants' preferences for features of existing tools. During this workshop and discussions with participants, we observed that they desired that the Co-Inform project include the key features of existing misinformation tools. Existing tools, along with the feature(s) preferred by the stakeholders, include the following:
Rbutr: includes sector-wise repositories of news and community rebuttal and feedback from the community about a news item or a tweet.
Foller.me and Botometer: provide generic information about social network users to help users know who shared information.
Fakespot: includes assessment/analysis of reviews about a news item or article.
NewsGuard: enables users to provide opinions along with supporting material about the authenticity of a news item or a tweet, informs users how reliable a news item or tweet is, and provides credibility statistics about a news item or a tweet. 
Greek Hoaxes Detector: analyses news items or tweets and assigns them a label so that users can see whether they are misinformative.
The participants strongly prioritised these features over those requiring an active contribution to rebutting a story or claim. These observations thus indicate that automated tool support for reliable information detection, tools that support active reasoning, and training in becoming attuned to misinformation strategies (Roozenbeek and van der Linden, 2019) are of high importance. Future studies could further explore the reasons why people engage in a cognitive process of increasing links between nodes for coherence while taking a backseat when tasked with correcting misinformation. This would yield insights into whether and when a push or stimulus may be required when designing novel crowdsourced verification tools. A main observation is that detection tools by themselves cannot combat misinformation; they must be complemented by other means. Automated solutions must work in a context of general societal awareness in combination with the detection mechanisms we investigated in this study. First, as our results indicate, no tool support is likely able to address all user preferences, which is why tools must be complemented by general awareness. There is thus a need to integrate automated systems with broad public information campaigns, including quick tips for citizens on how to conduct research online by themselves on news articles whose content seems to be dubious. Second, our results indicate that journalists take a rather passive approach to detecting misinformation; there is thus a need to increase awareness of the need for more active detection of misinformation. This can be done by, for instance, organising media and news literacy workshops that bring together, inter alia, fact-checkers and interested citizens with journalists. Third, because even the most sophisticated automatic tools cannot address the entire range of trust issues and preference structures for evaluating them, there is also a need for reinforced legislation to increase transparency among technology companies concerning the use of data and the origin of information. Fourth, there is a need to create diverse and cross-sectorial teams whose tasks are to spot misinformation and to warn the public by providing clear explanations. Finally, media and news literacy classes should be introduced into the school curriculum in parallel with, for instance, topics on information technology (e.g., Koulolias et al., 2018). Although the first workshop had a global scope, the second and third ones were conducted in Europe. Within Europe, three different geographic locations were chosen (north, middle, south) with cultures that differ in both governance and social media. Although it is impossible to include every country, the sample set constituted good coverage of European countries. Thus, it seems feasible to generalise the findings to Europe as a whole. Because the first workshop focussed more on assessing the effects of misinformation in society, it provided an overview of the scope of the problem in different countries. It did not, however, yield data for classifying the extent of misinformation according to social media maturity, state constitution, or political traditions. Thus, beyond being generalised to Europe, our findings also form a relevant and important basis for conducting similar studies in other parts of the world. 
Data availability For individual privacy reasons, the data collected from stakeholder elicitations and preferences cannot be made available in a public repository. Received: 21 July 2020; Accepted: 8 December 2020; Notes 1 See, for example, www.coinform.eu. 2 We acknowledge that the preference for the more passive design options is sensitive to the availability of scientific evidence. For example, a passive attitude can be connected to the fact that tools for mitigating misinformation are new to users. Therefore, users might have vague and unclear associations about the features of such tools (Uekermann et al., 2010; Svahn and Lange, 2009). 3 A description of the technical details of the evaluation procedures is beyond the scope of this paper, but a more in-depth explanation of them is provided in Danielson and Ekenberg (2019).
2023-02-23T14:28:51.672Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "dea3d548258fa226bc5bb81361354be013263180", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41599-020-00702-9.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "dea3d548258fa226bc5bb81361354be013263180", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
234505409
pes2o/s2orc
v3-fos-license
Bioaccumulation and Human Health Risk of Heavy Metals from Pesticides in Some Crops Grown in Plateau State, Nigeria† The health risk assessment of heavy metals in food crops fumigated exclusively with pesticides as the source of metal contamination is mostly overlooked. This study determines the concentrations of heavy metals (Cd, Pb, Cr, Cu and Zn) in some food crops fumigated with pesticides and their health risk to humans. The mean concentrations of heavy metals in different parts of the studied crops ranged from 0.12–2.03, 1.73–23.34, 1.60–1150.50, 0.67–19.50 and 0.09–6.14 mg/kg for Zn, Pb, Cu, Cr and Cd, respectively. The concentrations of Cd, Pb and Cr in the investigated crops were above the WHO (2011) permissible limits, and concentrations followed the decreasing order Cu > Pb > Cr > Cd > Zn. Bioaccumulation factor (BAF) values were >1 for Cd, Pb and Zn, and the BAF was highest for copper (141.75) in Oryza sativa. Pollution indices showed all crops were contaminated with Cd, Pb and Cr and are likely to pose a potential health risk to humans. The estimated daily intakes of Cd and Pb from all the studied crops exceeded the USEPA (2006) oral reference dose limits. A Hazard Quotient >1 was observed only for Cu from the consumption of Oryza sativa (3.504) and could cause a potential health risk in humans. The Hazard Index indicated health risk through the consumption of Oryza sativa (4.666), Zea mays (1.475) and Capsicum annuum (1.132) for all the studied metals. Therefore, there is a need for regular screening and monitoring of heavy metals in food crops from pesticide sources. Introduction Pesticides are extensively employed in agriculture to kill pests or unwanted organisms that may reduce crop yields, and to increase agricultural production (Oyeyiola et al., 2017). Farmers in northern Nigeria depend largely on pesticides for the control of pests, weeds and other diseases (Desalu et al., 2014). This has led to the proliferation of imports of new pesticide products into Nigeria whose chemical contents are not known or are mostly concealed by the manufacturers (Barau et al., 2017). The use of pesticides has been increasing, and pesticides have been shown to contain heavy metals (Yuguda et al., 2015; Barau et al., 2018). However, despite the global ban on heavy metals in pesticides, a recent study revealed the presence of heavy metals in pesticides in Europe at levels above the recommended farmer dilution rates (Defarge et al., 2018). Soil-plant heavy metal transfer is the main pathway for pollutants to enter the human body through the food chain (Wang et al., 2004). There is a paucity of studies, if any, on heavy metal contamination of food crops fumigated exclusively with pesticides as the source of heavy metal contamination and the associated health risk to humans. Therefore, this study was designed with the aim of determining the concentrations of heavy metals (Cd, Pb, Cr, Cu, Zn) from pesticides in some crops and soils, and their associated human health risk in Jos, Plateau State. Materials and Methods Samples of leaves, stems, roots and fruits of tomatoes, pepper, onions, cabbage, carrot, cucumber, spinach, lettuce and maize, and corresponding soils, were collected from Naraguta Farm A in Plateau State, Nigeria (N09°58.586, E008°53.820) and Naraguta Farm B (N09°58.562, E008°53.230). Soils collected from locations outside agricultural farms that had no pesticide application were used as controls. 
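The risk indices named in the abstract follow standard USEPA-style definitions. The sketch below shows the usual formulas; the reference doses, intake rate and body weight are commonly used values included here as assumptions, not the paper's exact exposure parameters.

```python
def baf(c_plant_mg_kg, c_soil_mg_kg):
    """Bioaccumulation factor: metal concentration in the crop
    divided by the concentration in the corresponding soil."""
    return c_plant_mg_kg / c_soil_mg_kg

def edi(c_plant_mg_kg, intake_kg_day, body_weight_kg):
    """Estimated daily intake of a metal (mg/kg body weight/day)."""
    return c_plant_mg_kg * intake_kg_day / body_weight_kg

def hq(edi_value, rfd_mg_kg_day):
    """Target hazard quotient; values >1 flag potential
    non-carcinogenic risk (USEPA, 2006)."""
    return edi_value / rfd_mg_kg_day

# Oral reference doses in mg/kg/day; commonly cited USEPA-style values,
# included here as assumptions (verify against USEPA IRIS).
RFD = {"Cd": 0.001, "Pb": 0.0035, "Cr": 0.003, "Cu": 0.04, "Zn": 0.3}

# Illustrative numbers only: hypothetical metal levels in one crop,
# an adult eating 0.25 kg/day at 70 kg body weight.
metal_in_crop = {"Cd": 0.5, "Pb": 2.0, "Cu": 140.0}  # mg/kg, hypothetical
intake, bw = 0.25, 70.0

hq_per_metal = {m: hq(edi(c, intake, bw), RFD[m])
                for m, c in metal_in_crop.items()}
hazard_index = sum(hq_per_metal.values())  # HI = sum of HQs over metals
print(hq_per_metal, hazard_index)
```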
All samples were collected in clean brown envelopes, labelled, transported to the ATBU Biology laboratory and analyzed for Cr, Cu, Cd, Zn and Pb using an atomic absorption spectrophotometer. Health risk assessment indices (Chukwuma, 1994; USEPA, 2006) were determined, and the results were statistically analyzed with SPSS version 8.1 using two-way analysis of variance. Heavy Metals in the plants, soil and their factors There was significant variation (p < 0.05) in the concentrations of heavy metals in different parts of most of the studied crops (Tables 1 and 2). Heavy metal concentrations in the studied crops followed the decreasing order Cu > Pb > Cr > Cd > Zn (Tables 1 and 2). Cadmium, chromium and lead concentrations in all the studied crops were above the permissible limits except in Allium cepa (root, leaf, bulb), Daucus carota (root, stem), Cucumis sativus (fruit) and Lactuca sativa (root, leaf) (Tables 3.1a and 3.1b). The concentrations of zinc in all the investigated crops were below the permissible limit. Copper was also below the permissible limit except in Cucumis sativus (stem, leaf, fruit), Zea mays (root, leaf, fruit) and Oryza sativa (root, stem, fruit) (Tables 1 and 2). Discussion and Conclusions The contamination of food crops by heavy metals from pesticide sources is a major concern for food quality and safety. The concentrations of Cd, Pb and Cr in all the studied crops fumigated with pesticides as the only source of contamination exceeded the WHO (2011) permissible limits, while the concentrations of heavy metals in the corresponding soils of all the studied crops were below the UNEP (2013) limits for agricultural soils. Most of the studied crops showed BAF > 1 for Cd, Pb and Zn, and BAF followed the decreasing order Cu > Zn > Pb > Cd > Cr. Pollution indices indicated that most of the studied crops were contaminated with Pb, Cd and Cr. The estimated daily intakes of metals showed that all the studied crops exceeded the daily oral reference dose limits and could pose a risk to humans. Hazard quotients showed that all the studied crops were safe for human consumption except Oryza sativa for Cu, which may pose a risk to humans. However, the inhabitants may be experiencing severe adverse health risks (HI) from the consumption of Oryza sativa, Zea mays and Capsicum annuum for all the studied metals. Similar reports relating to this work include but
2021-05-16T00:03:54.529Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "bc10553b8196d2c2d523a16cbc86f9b884f22d49", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-9976/4/1/12/pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5dab725b8aa637eed7e7d3f2761ff52f840073dd", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
5150902
pes2o/s2orc
v3-fos-license
Impacts of chronic kidney disease and albuminuria on associations between coronary heart disease and its traditional risk factors in type 2 diabetic patients – the Hong Kong diabetes registry Background Glycated haemoglobin (HbA1c), blood pressure and body mass index (BMI) are risk factors for albuminuria; the latter in turn can lead to hyperlipidaemia. We used novel statistical analyses to examine how albuminuria and chronic kidney disease (CKD) may influence the effects of other risk factors on coronary heart disease (CHD). Methods A prospective cohort of 7067 Chinese type 2 diabetic patients without history of CHD, enrolled since 1995, was censored on July 30th, 2005. Cox proportional hazard regression with restricted cubic splines was used to auto-select predictors. Hazard ratio plots were used to examine the risk of CHD. Based on these plots, non-linear risk factors were categorised, and the categorised variables were refitted into various Cox models in a stepwise manner to confirm the findings. Results Age, male gender, duration of diabetes, spot urinary albumin:creatinine ratio, estimated glomerular filtration rate, total cholesterol (TC), high density lipoprotein cholesterol (HDL-C) and current smoking status were risk factors of CHD. A linear association between TC and CHD was observed only in patients with albuminuria. Although, in general, increased HDL-C was associated with decreased risk of CHD, full-range HDL-C was associated with CHD in an A-shaped manner with a zenith at 1.1 mmol/L. Albuminuria and CKD were the main contributors to the paradoxically positive association between HDL-C and CHD for HDL-C values less than 1.1 mmol/L. Conclusion In type 2 diabetes, albuminuria plays a linking role between conventional risk factors and CHD. The onset of CKD changes risk associations between lipids and CHD. Background Coronary heart disease (CHD) is one of the leading causes of premature death [1]. Patients with type 2 diabetes have a 2–4-fold increased risk of CHD compared to those without [2]. The United Kingdom Prospective Diabetes Study (UKPDS) has identified hypertension, hyperglycaemia, high low-density-lipoprotein cholesterol (LDL-C), low high-density-lipoprotein cholesterol (HDL-C) and smoking status as major risk factors of CHD [3]. Recent studies have confirmed that albuminuria is another strong risk factor for cardiovascular disease [4][5][6][7]. This association holds true for albuminuria which occurs early in life [8]. Glycated haemoglobin (HbA1c), blood pressure (BP), HDL-C, smoking and body mass index (BMI) are promoters of albuminuria [9]. The latter has been shown to increase the likelihood of high TC and LDL-C levels in a graded fashion [10]. While this relationship may partly explain the increased risk of CHD in patients with CKD [11], in patients with end-stage renal disease (ESRD), most studies point to low HDL-C but not high LDL-C as a risk factor for cardiovascular diseases (CVD) [12]. Although there is some evidence suggesting possible linear risk relationships between HbA1c, BP, LDL-C, HDL-C and CHD [3], the linearity of these associations has never been rigorously examined. In this study, we used a non-linear approach to examine the possible impacts of albuminuria and CKD on conventional risk factors and new onset of CHD in a large prospective cohort of Chinese Type 2 diabetic patients. Subjects The Prince of Wales Hospital is a regional hospital which serves a population of 1.2 million in Hong Kong. 
The Hong Kong Diabetes Registry was established in 1995 and enrols 30-50 ambulatory diabetic patients each week. The referral sources included general practitioners, community and specialty clinics, and patients discharged from hospitals. Enrolled patients with hospital admissions within 6-8 weeks prior to assessment accounted for less than 10% of all referrals. The 4-hour assessment of complications and risk factors was performed on an outpatient basis, modified from the European DIABCARE protocol [13]. Once a diabetic subject had undergone the comprehensive assessment, he/she was considered to have entered this study cohort and would be followed up until death. The study was approved by the Clinical Research Ethics Committee, Chinese University of Hong Kong. The study complied with the Declaration of Helsinki, and written informed consent was obtained from all patients. Clinical endpoints, including discharge diagnoses of hospital admissions and mortality, were censored on 30th July 2005. Details of hospital admissions were retrieved from the Hong Kong Hospital Authority Central Computer System, which records admissions to all public hospitals in Hong Kong (accounting for 95% of hospital beds in Hong Kong). These databases were matched by a unique identification number, the Hong Kong Identity Card number, which is compulsory for all residents in Hong Kong. Hospital discharge summaries, as coded by the International Classification of Diseases, Ninth Revision (ICD-9), were used to identify the first CHD event. CHD was defined as (1) nonfatal myocardial infarction (code 410), (2) nonfatal ischemic heart disease (codes 411-414) and (3) death due to CHD (not including death due to heart failure). Follow-up time was calculated as the period from enrolment to the first CHD event, death or 30th July 2005, whichever came first. From 1995 to 2005, 7920 diabetic patients were enrolled in the Registry. Among them, 332 with Type 1 diabetes, defined as acute presentation with diabetic ketoacidosis, heavy ketonuria (>3+) or continuous requirement of insulin within 1 year of diagnosis [14], and 5 with uncertain type 1 diabetes status, were excluded from the analysis. Forty-nine were excluded due to non-Chinese or unknown nationality. Four hundred and sixty-seven patients were further excluded for having a past history of CHD (including heart failure) at enrolment. A total of 7067 Chinese type 2 diabetic patients without history of CHD or heart failure at baseline were included in this analysis. Clinical measurements Details of assessment methods, definitions and laboratory assays have been previously described [15,16]. On the day of assessment, patients attended the centre after at least 8 hours of fasting and underwent anthropometric measurements and laboratory investigations. We used the Modification of Diet in Renal Disease (MDRD) formula re-calibrated for Chinese [17] to estimate the glomerular filtration rate (eGFR). Statistical analyses The Statistical Analysis System (SAS, Release 9.10) was used to perform the statistical analysis (SAS Institute Inc., Cary, USA). In order to detect any thresholds, Restricted Cubic Splines (RCS) with 4 knots (i.e., 1 term decomposed into 3 terms: x, x1 and x2) [18] and Cox proportional hazard regression with the stepwise algorithm (p < 0.05 for entry and stay) were used to obtain a group of significant predictors of CHD. The use of RCS in Cox proportional hazards models has been described in detail by Harrell [18]. The detailed stepwise algorithm for spline Cox regression models has been described elsewhere [19]. 
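As a rough illustration of this methodology (not the authors' SAS implementation), the sketch below builds a 4-knot restricted cubic spline basis in Harrell's parameterisation — giving the x, x1, x2 decomposition mentioned above — fits a Cox model with the lifelines package, and derives a full-range hazard-ratio curve as exp(y − y_ref), the construction used for the HR plots in this section. The data, knot percentiles and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def rcs_basis(x, knots):
    """Harrell's restricted cubic spline basis. With k knots this
    returns k-1 columns: the linear term plus k-2 nonlinear terms,
    so 4 knots give the x, x1, x2 decomposition described above."""
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    pos3 = lambda u: np.clip(u, 0.0, None) ** 3
    scale = (t[-1] - t[0]) ** 2
    cols = [x]
    for j in range(len(t) - 2):
        cols.append((pos3(x - t[j])
                     - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
                     + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
                    / scale)
    return np.column_stack(cols)

# hypothetical cohort: follow-up time, CHD event indicator, baseline eGFR
rng = np.random.default_rng(1)
df = pd.DataFrame({"time": rng.exponential(5.0, 1000),
                   "chd": rng.binomial(1, 0.1, 1000),
                   "egfr": rng.normal(90.0, 25.0, 1000)})

knots = np.percentile(df["egfr"], [5, 35, 65, 95])  # a common 4-knot placement
for i, col in enumerate(rcs_basis(df["egfr"], knots).T):
    df[f"egfr_s{i}"] = col

cph = CoxPHFitter()
cph.fit(df[["time", "chd", "egfr_s0", "egfr_s1", "egfr_s2"]],
        duration_col="time", event_col="chd")

# full-range HR curve: HR(x) = exp(y(x) - y(x_ref)), with y the fitted
# spline contribution and x_ref e.g. the 75th percentile of eGFR
beta = cph.params_[["egfr_s0", "egfr_s1", "egfr_s2"]].values
grid = np.linspace(df["egfr"].quantile(0.01), df["egfr"].quantile(0.99), 200)
y = rcs_basis(grid, knots) @ beta
y_ref = rcs_basis([np.percentile(df["egfr"], 75)], knots) @ beta
hr_curve = np.exp(y - y_ref)
```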
Candidate variables at enrolment selected by the spline Cox model included systolic/diastolic blood pressure (BP), HbA1c, BMI, waist circumference, blood haemoglobin (Hb), white blood cell (WBC) count, HDL-C, LDL-C, triglycerides (TG), total cholesterol and drug usage (Table 1). After several auto-selection cycles [19], spot urine ACR and eGFR were also included in the final model. In exploratory analysis, we calculated hazard ratio (HR) changes over the full ranges of baseline risk factors before and after adjustment for eGFR and ACR, in order to observe the impacts of albuminuria and CKD on these risk associations. The hazard ratio between two points of a variable X_i can be estimated by exp(y2 − y1), where y1 and y2 are the corresponding RCS function values at the two points. In this study, the 25th or 75th percentile (for near-linear relationships) or the zenith point (for non-linear relationships) of a baseline variable was chosen as the reference point (y1) to estimate the HR at other points of the variable X_i (y2). Here, y (y1 and y2) was the RCS function value of X_i, calculated as spline(X_i) = β·x + β1·x1 + β2·x2, where β, β1 and β2 were estimated by applying x, x1 and x2 as covariates in the Cox models. We then categorised significant continuous risk factors identified in the HR plots and used Cox regression analysis to confirm the findings of the risk curve analysis. The proportional hazards assumption and functional form were checked using the Supremum test [20], which is implemented via the ASSESS statement in the SAS procedure PROC PHREG. A p-value of <0.05 for two-sided tests was considered statistically significant. Study population and predicting models At enrolment, the median age of the cohort was 57 years (interquartile range [IQR]: 46-67 years) with a median disease duration of 5 (IQR: 1-11) years. During a median follow-up period of 5.40 (IQR: 2.87-7.81) years, 351 (4.97%) patients developed incident CHD, giving an incidence rate of 9.28 (95% CI: 8.31-10.24) per 1000 person-years. During the follow-up period, 681 (9.64%) patients died. Of these, 47 deaths were due to fatal CHD (included in the 351 events, with CHD as the principal diagnosis). Patients who developed CHD were older, had longer duration of diabetes, a more unfavourable lipid profile (LDL-C, HDL-C and TG), worse renal function, higher HbA1c, urinary ACR and WBC, and lower Hb, and were more likely to be treated with insulin and antihypertensive drugs at baseline than those who did not (Table 1). The spline Cox model selected sex, smoking status (current smoker/ex-smoker), use of angiotensin-converting enzyme inhibitors (ACEI)/angiotensin II receptor blockers (ARB), spline terms of age, duration of diabetes, TC, HDL-C and blood Hb, and insulin use at enrolment (Model 1). Blood Hb (p = 0.1709) and insulin use (p = 0.1710) were no longer significant after further inclusion of the spline term of eGFR, while all other variables remained significant. Further adjusting for the spline term of ACR (p for ACR = 0.0041) did not change the significance of the other variables in the model. Risk factors of coronary heart disease Estimated GFR was negatively associated with incident CHD (Figure 1 and Table 2). The HR for CHD started to rise at the trough value of 100 ml/min per 1.73 m², and rose rapidly from 60 ml/min per 1.73 m² downwards. The HR of ACR for CHD increased rapidly from 0 to 150 mg/mmol before reaching a plateau. 
Similar trends with ACR were observed in patients with normo-, micro- and macroalbuminuria and those with ACR ≥150 mg/mmol (p < 0.05 for trend) (Figure 1 and Table 2). There was a near-linear association between TC and CHD risk, which was attenuated by adjustment for eGFR and ACR (Figure 2a). Exclusion of patients with CKD led to a higher HR for those with high TC >5.0 mmol/L. In patients without CKD, the HR of TC for CHD started to increase linearly from 5.0 mmol/L upwards. In patients with CKD, the shape of the risk curve changed to an "A shape" with a peak HR at 5.0 mmol/L (Figure 2b). In patients with normoalbuminuria, there was no significant association between TC and CHD (Figure 2c). Conversely, in patients with albuminuria, there was a linear relationship between TC and CHD risk. HDL-C was associated with CHD in an A-shaped manner with a zenith at 1.1 mmol/L and a long tail on the right (Figure 3a). Both HDL-C ≥1.40 mmol/L and HDL-C <0.80 mmol/L were associated with reduced risk of CHD (Table 3b), which remained significant after adjusting for eGFR and ACR. The gradient of the HR curve accelerated more rapidly from very low levels of HDL-C up to 1.1 mmol/L in the albuminuric group than in the non-albuminuric group. After excluding patients with CKD (n = 690), the negative risk association between CHD and HDL-C was significant for HDL-C ≥1.40 mmol/L (p < 0.001) but not for HDL-C <0.80 mmol/L (p = 0.127). Blood Hb was associated with CHD risk in a linear manner (Figure 4). Excluding patients with CKD changed the shape of the HR curve, with a shoulder value at 12.5 g/dL. Adjusting for eGFR also rendered the HR non-significant for Hb <12.5 g/dL versus ≥12.5 g/dL (Table 2). Risk of CHD increased with disease duration during the first 13 years and then remained at a high level (Figure 4 and Table 2). Old age, male gender, current smoking, use of ACEI/ARB and use of insulin were also associated with higher risk of CHD (Figure 4 and Table 2).

Figure 1. Full-range risk associations between CHD and eGFR/ACR. a. Black: adjusted for Model 1 variables (p < 0.05); Blue: further adjusted for ACR (p < 0.05). Model 1 variables include age, sex, smoking status (current/ex), total cholesterol, HDL-C, Hb, eGFR and use of ACEI/ARB as well as use of insulin at enrolment. The hazard ratio was calculated using the 25th and 75th percentiles as the reference level. b. Black: adjusted for Model 1 variables (p < 0.05); Blue: further adjusted for eGFR (p < 0.05).

Discussions Our study re-affirms previous observations that age, male gender, tobacco intake, long disease duration, high TC, low HDL-C, high ACR, and low eGFR were independent risk factors of CHD using conventional Cox regression analysis. The novelty of our analysis lies in its ability to demonstrate the powerful effects of albuminuria and CKD on modifying these risk relationships, as evidenced by changes in the HR plots of these risk factors. In particular, the risk association between blood Hb and CHD was entirely explained by eGFR and ACR, while albuminuria and CKD had profound effects on the CHD risk associations with HDL-C and TC. Lipid parameters The UKPDS reported a graded increase in CHD risk with LDL-C in type 2 diabetes [3]. In our model, instead of LDL-C, TC was selected as a risk factor of CHD. 
More detailed analysis revealed a complex interplay between lipid parameters, albuminuria, CKD and CHD risk in this large prospective cohort of type 2 diabetic patients with a broad range of renal function and albuminuria. While we observed a near-linear relationship between TC and CHD, once ACR was fitted into the risk curve, this relationship was present only in patients with albuminuria. These findings concur with those from a large-scale cross-sectional study in the USA (n = 17,702) which showed a graded risk association between albuminuria and hypercholesterolemia (TC and LDL-C) [10]. On the other hand, the linear association between CHD and TC at ≥5.0 mmol/L was mainly observed in patients without CKD. Using the non-linear approach, we further demonstrated that high HDL-C was associated with low CHD risk when HDL-C was ≥1.1 mmol/L. However, for levels lower than 1.1 mmol/L, the presence of albuminuria and, in particular, CKD markedly changed the shape of the risk curve to an "A shape" with a paradoxically positive association between CHD risk and HDL-C level, giving rise to a zenith value of 1.10 mmol/L.

Figure 2. Full-range risk associations between total cholesterol and CHD before and after adjustment for eGFR and ACR.

Figure 3. Full-range risk associations between HDL-C and CHD before and after adjustment for eGFR and ACR. a. Black: derived from Model 1 (p < 0.05); Blue: further adjusted for eGFR (p < 0.05); Red: further adjusted for eGFR and ACR (p < 0.05); Cyan: limited to eGFR ≥60 ml/min per 1.73 m² in Model 1 (p < 0.05). Model 1 variables include age, sex, smoking status (current/ex), total cholesterol, HDL-C, Hb, eGFR and use of ACEI/ARB as well as use of insulin at enrolment. The hazard ratio was calculated using the zenith as the reference level. b. Black: adjusted curve in patients with eGFR ≥60 ml/min per 1.73 m² (p < 0.05); Blue: adjusted curve in patients with eGFR <60 ml/min per 1.73 m² (p < 0.05). c. Black: adjusted curve in patients without albuminuria (p < 0.05); Blue: adjusted curve in patients with albuminuria (p < 0.05).

Figure 4. Full-range risk associations between CHD and Hb/duration of diabetes/age before and after adjustment for eGFR and ACR. a. Black: adjusted for Model 1 variables (p < 0.05); Blue: further adjusted for eGFR (p: NS); Red: further adjusted for eGFR and ACR (p: NS); Cyan: limited to eGFR ≥60 ml/min per 1.73 m² in Model 1 (p < 0.05). Model 1 variables include age, sex, smoking status (current/ex), total cholesterol, HDL-C, Hb, eGFR and use of ACEI/ARB as well as use of insulin at enrolment. The hazard ratio was calculated using the 25th and 75th percentiles as the reference level. b. Black: adjusted for Model 1 variables (p < 0.05); Blue: further adjusted for eGFR (p < 0.05); Red: further adjusted for eGFR and ACR (p < 0.05); Cyan: limited to eGFR ≥60 ml/min per 1.73 m² in Model 1 (p < 0.05). c. Black: adjusted for Model 1 variables (p < 0.05); Blue: further adjusted for eGFR (p < 0.05); Red: further adjusted for eGFR and ACR (p < 0.05); Cyan: limited to eGFR ≥60 ml/min per 1.73 m² in Model 1 (p < 0.05).

These non-linear relationships were further confirmed by conventional Cox regression analysis. 
Using 0.80-1.39 mmol/L as the referent, HDL-C <0.80 mmol/L and HDL-C ≥1.4 mmol/L were both associated with reduced risk of CHD. The nature of this positive association between HDL-C and CHD for HDL-C values less than 1.1 mmol/L, observed mainly in patients with CKD or albuminuria, requires further elucidation. However, in light of the potent anti-inflammatory and anti-oxidant properties of HDL-C particles [21,22], we postulate that these associations may be due to changes in the metabolic milieu associated with CKD and severe albuminuria [23,24]. Against these thought-provoking findings, it is noteworthy that in the recent 4D study, treatment with atorvastatin failed to reduce CHD risks in patients with ESRD [25]. These findings are not unexpected given the lack of association between LDL-C and CHD in patients with ESRD in epidemiological studies, as well as the non-association between TC and CHD risk in our patients with CKD [12]. Furthermore, two recent clinical trials failed to confirm the hypothesis that increasing HDL-C can reduce the progression of coronary atherosclerosis [26,27]. Again, our findings regarding the powerful effects of albuminuria and CKD on altering the pattern of risk association between HDL-C and CHD highlight the complexity of the interrelationships between energy metabolism and organ function. Blood haemoglobin, renal impairment and albuminuria Our group and others have reported the risk association of CHD with low eGFR [11,28]. In our current analysis, adjustment for ACR greatly attenuated the association between eGFR and CHD risk, suggesting that the risk association was in part mediated by albuminuria, a marker of endothelial dysfunction. In this cohort, we detected a sharp and linear association between CHD risk and ACR from 0 up to 150 mg/mmol. This observation therefore concords with findings by Gerstein et al. showing that any degree of albuminuria is a risk factor for cardiovascular disease [29]. There is strong evidence showing that low blood Hb is a strong predictor for CHD [30,31]. In our analysis, the risk association between CHD and blood Hb was rendered non-significant after adjustment for eGFR and exclusion of patients with CKD. These findings suggest that blood Hb may merely serve as a surrogate marker for CKD and may thus explain the negative results of two recent clinical trials which failed to confirm the beneficial effects of correcting anaemia using erythropoietin therapy on cardiovascular endpoints in patients with ESRD [32,33]. Other CHD risk factors In patients with diabetes duration less than 13 years, there was a linear relationship between CHD risk and disease duration. In patients with disease duration of more than 13 years, the statistical significance of disease duration disappeared. This may be confounded by the strong relationship between disease duration and albuminuria and that between albuminuria and hypercholesterolaemia [9,10]. Age is a well-known risk factor of CHD [3]. However, our study suggests that this age-associated CHD risk was in part mediated by loss of renal function after the age of 55 years. In agreement with the UKPDS [3], we also found a risk-protecting effect of female gender on CHD. Smoking is a well-known risk factor of CHD [3] and was also independently associated with CHD in our cohort. Although there is strong epidemiological evidence supporting the risk association between CHD and glycemic control [3,34], the UKPDS failed to confirm the benefits of improving glycemia on CHD rates in an interventional setting [35]. 
In our cohort, HbA1c was a significant predictor for CHD with a HR of 1.07 for every 1% increase in HbA1c (p = 0.0136) after controlling for age, sex, SBP and smoking status. However, the association was rendered non-significant once ACR, TC, HDL-C or disease duration was adjusted for. Other studies have shown that improvement in glycemic control reduced albuminuria and hypercholesterolaemia [35][36][37]. Taken together with the possible causal effect of albuminuria on hypercholesterolaemia [10], our findings suggest that the effect of HbA1c on CHD risk is likely to be mediated through risk factors such as albuminuria and lipids. Blood pressure is a strong risk factor for CHD in type 2 diabetes [3]. In our analysis, the age and sex adjusted hazard ratio of SBP for CHD was 1.23 (95% CI: 1.07-1.18) per 10 mmHg (p < 0.001). However, after adjusting for the spline term of ACR, the significance of SBP did not persist (p = 0.172). After removal of the use of ACEI/ARB from the spline Cox model (without ACR and eGFR), the spline term of SBP was significant (p = 0.007). These findings suggest that low BP and use of ACEI/ARB were associated with reduced risk of CHD, largely mediated by albuminuria. Similar to the findings from the UKPDS [3], BMI and waist circumference were not selected as risk factors of CHD in the model. The age and sex adjusted HRs were also not significant (p = 0.494 for BMI and 0.182 for waist circumference). Although BMI has been implicated in albuminuria [9], the association between BMI and CHD may be confounded by other mediators such as dyslipidemia and inflammation. Besides, the prognostic significance of BMI in the presence of co-morbidities such as diabetes may become paradoxically reversed [38]. Limitations This prospective study comprised a heterogeneous cohort of type 2 diabetic patients with a wide range of disease duration and risk factors. Although this heterogeneity and the use of single baseline values may theoretically reduce the precision of these risk estimations, this drawback was partly compensated by the relatively large number of clinical events, detailed phenotyping at baseline and long period of observation. Overall, results generated from both conventional and non-linear approaches are robust and consistent, and have generated alternative hypotheses that are biologically plausible. Further clinical and experimental studies are required to confirm these findings. Conclusions and Implications Using a large prospective database and relatively novel and robust statistical methods, we have found a strong linear association between TC and CHD only in patients with albuminuria. Adjusting for eGFR and albuminuria attenuated the associations between lipids, Hb, BP, duration of diabetes and CHD, suggesting that albuminuria plays a linking role between these risk factors and CHD. The onset of CKD further changes risk associations between lipids (such as TC and HDL-C) and CHD. Recently, several major randomised clinical trials have yielded negative results regarding the effects of correcting anemia and reducing LDL-C on cardiovascular outcomes in patients with ESRD, as well as that of raising HDL-C on reducing progression of atherosclerosis. Based on these observations, we infer the following pathways to CHD in type 2 diabetes: 1) hyperglycaemia and hypertension lead to albuminuria, a marker of endothelial and renal damage; 2) albuminuria leads to hyperlipidaemia, which further increases the risk of CHD; 3)
albuminuria, both as a surrogate for multiple risk factors and as a causal factor itself, leads to deterioration of renal function; and 4) reduced renal function further changes the pattern of risk association between HDL-C and CHD, i.e., the predictive value of very low HDL-C (<0.8 mmol/L) no longer holds once CKD has developed. Understanding the complex relationships among risk factors of CHD in type 2 diabetes is an important step towards further reducing CHD risk in type 2 diabetes. For example, reducing albuminuria might further control hyperlipidaemia and enhance the benefits of controlling traditional risk factors such as HbA1c, BP and LDL-C. Our data also suggest that retarding the rate of deterioration of renal function and correcting anaemia may have important cardioprotective effects. However, these hypotheses will need to be confirmed by both experimental and interventional studies.

Competing interests

The author(s) declare that they have no competing interests.
Impact of COVID-19 pandemic on patients with Fabry disease: An Italian experience

We conducted an observational study to assess the impact of the COVID-19 emergency on the management and outcomes of patients with Fabry disease referring to our Center in Naples, Italy. No patient of the 129 included reported suspected symptoms; 3 isolated themselves in auto-quarantine for flu-like symptoms. All treated patients regularly continued their therapies; 8 missed one infusion: 3 for self-isolation together with 2 relatives, and 3 refused to receive the nurse at home. All elective procedures were deferred and telemedicine was adopted.

TEXT

Coronavirus disease 2019 (COVID-19), a severe acute respiratory syndrome with a high mortality rate caused by SARS-CoV-2, was declared a global pandemic by the World Health Organization in March 2020. Italy is one of the most severely affected countries: from the beginning of the emergency until 11th May 2020, the Italian Ministry of Health has reported 220,000 confirmed cases and 30,000 deaths [www.salute.gov.it]; as a consequence, the landscape of Italy's healthcare system has been rapidly and dramatically altered. In particular, a main challenge in this pandemic is balancing patient care needs with limited resources. Therefore, to face the changing resource allocation during the COVID-19 pandemic, several strategies have been implemented to minimize the interruption of care and treatment. Patients with underlying chronic multisystemic disorders, like Fabry disease (FD), are considered at greater risk of COVID-19 infection and more likely to have higher morbidity and mortality [1]. FD is an X-linked disorder caused by lysosomal α-galactosidase A (α-Gal) deficiency, with subsequent deposition of undegraded glycosphingolipid products, mainly globotriaosylceramide (Gb3) and globotriaosylsphingosine (lyso-Gb3), in multiple organs, with significant morbidity and premature death [2]. To date, the treatment options for this genetic disease include intravenous (i.v.) infusion of enzyme replacement therapy (ERT) with agalsidase alfa or agalsidase beta every other week, and oral therapy with the pharmacological chaperone migalastat [3]. Moreover, clinical trials to evaluate the efficacy of pegunigalsidase alfa, a pegylated dimerized version of agalsidase alfa, infused at two different doses either every other week or monthly, are currently ongoing [3]. The Fabry Center of Federico II University of Naples is one of the main referral centers for FD in Italy, with more than 150 patients, performing about 500 annual outpatient clinic visits. The rapid spread of COVID-19, combined with the consequent complete lockdown, required a number of changes in our FD center organization to avoid unnecessary exposure of staff and patients to infection, while still continuing to provide care and support to our patients. Therefore, we conducted an observational study with the aim of assessing the impact of the COVID-19 emergency on the clinical management and outcomes of patients with FD. All FD patients referring to our center were contacted by phone by physicians, to collect data about their health status and to organize their follow-up. A total of 129 patients were included, 60 males (46.5%) and 69 females (53.5%), mean age 47.5±15.9 years. No patient reported either symptoms suspicious for COVID-19 or a direct contact with a known positive case; therefore, no patient was specifically tested with a nasal swab.
Only 3 patients, presenting fever <37.5 °C and flu-like symptoms, isolated themselves in auto-quarantine for 14 days, with no further investigation. New enrolment into clinical trials was paused during the COVID-19 crisis, but FD patients already enrolled in therapeutic clinical trials continued the study drug. Specifically, for these patients, we organized home therapy services to maintain the necessary infusion schedule while minimizing social contact. Psychological support services were proposed to all patients, and 3 patients (2.3%) contacted the psychologist. Moreover, to promote physical distancing and in anticipation of an increased workload due to the pandemic, elective treatments and procedures, routine laboratory testing and non-urgent outpatient clinics were deferred. Telemedicine, including both video and telephone-only contacts, is emerging as an important tool for maintaining outpatient care while limiting direct patient contact [4]. When clinically appropriate, telemedicine was adopted to care for ambulatory patients. No programmed visit proved undeferrable, so each patient with a scheduled visit was contacted 24 to 48 hours before their appointment by a clinic coordinator to confirm the visit date and time and to explain that it would be conducted in the comfort of their own home with the use of our telehealth facilities. Then, medical staff called patients on the day and time of their clinic appointments. Patients were asked to take their temperature, pulse, weight and blood pressure. If necessary, visual examinations were performed, including evaluation for respiratory distress and oedema, using platforms such as FaceTime, WhatsApp, and Zoom to expedite this process. For patients needing laboratory surveillance, we limited it to laboratory tests easily drawn at a non-hospital-based laboratory. Moreover, we suggested the use of urine dipsticks for home monitoring of proteinuria. Finally, patients' medications and prescriptions were evaluated and adjusted. From our survey data, among the 129 interviewed patients, no one was infected. However, since no specific tests (oropharyngeal swab or serum antibodies) were performed, asymptomatic or presymptomatic cases cannot be excluded in our cohort. One reason for the absence of infection in our FD population could be the particular attention paid by this category of patients to hygiene and infection prevention measures. A further explanation could be the safe organization of home therapy, which seems to be the most efficient way to maintain therapy access during a pandemic, obviously monitoring the involved personnel and guaranteeing the correct use of personal protective equipment. In fact, the only experience reported in the literature on the impact of COVID-19 on therapies for lysosomal storage disorders (LSD) showed that 49% of patients receiving ERT in the hospital experienced treatment disruption [5]. At present, no official indications exist on the management of FD patients during the emergency and post-emergency period. Therefore, the analysis of the effects of the COVID-19 pandemic on the medical care and health status of patients with FD will be useful to delineate consensus guidance.
Tsunamis Generated by Submerged Landslides: Numerical Analysis of the Near-Field Wave Characteristics

The accurate modeling of the landslide-generated tsunami characteristics in the so-called near-field is crucial for many practical applications. In this paper, we present a new full-3-D numerical method for modeling tsunamis generated by rigid and impermeable landslides in OpenFOAM® based on the overset mesh technique. The approach has been successfully validated through the numerical reproduction of past experiments for landslide-generated tsunamis triggered by a rigid and impermeable wedge at a sloping coast. The method has been applied to perform a detailed numerical study of the near-field wave features induced by submerged landslides. A parametric analysis has been carried out to explore the importance of the landslide's initial acceleration, directly related to the landslide-triggering mechanisms, on the tsunami generation process and on the related wave properties. Near-field analysis of the numerical results confirms that the influence of the initial acceleration on the tsunami wave properties is significant, affecting wave height, wave period, and wave celerity. Furthermore, it is found that the tsunami generation mechanism experiences a saturation effect for increasing landslide initial acceleration, confirming and extending previous studies. Moreover, the resulting extended database, composed of previous experimental data and new numerical ones, spanning a wider range of governing parameters, has been represented in the form of a "nondimensional wavemaker curve," and a new relationship for predicting the wave properties in the near field as a function of the Hammack number is proposed.

Introduction

Impulsive waves (i.e., tsunamis) can be generated by sudden displacements of volumes of water induced by earthquakes, landslides, volcanic eruptions, impacts of asteroids, and gradients of atmospheric pressure (Løvholt et al., 2015). Among these triggering mechanisms, landslides assume a relevant role, especially as far as confined geometries are concerned (e.g., bays, reservoirs, lakes, and fjords). Figure 1 shows two examples (left panel: Lituya Bay, Alaska; right panel: Stromboli Island, Italy) of areas prone to landslide tsunami hazard. The interest in landslide-generated tsunamis in proximity of the coast has risen in the last decades due to some devastating events, such as those in Lituya Bay in 1958 (Alaska; Fritz et al., 2009), in the Vajont Valley in 1963 (Italy; Panizzo et al., 2005), in Papua New Guinea in 1998 (Synolakis et al., 2002), in Stromboli in 2002 (Italy; Tinti et al., 2005), in Haiti in 2010, and recently in Indonesia in 2018 (Grilli et al., 2019). The physical process at hand is generally characterized by different length and time scales than those of tsunamis generated by earthquakes. The triggering mechanism, i.e., the landslide, can be classified as subaerial, partially submerged, or completely submerged, depending on the initial landslide position (McFall & Fritz, 2016). When the landslide occurs directly at the water body boundary, the generated impulse waves radiate seaward and propagate alongshore. Since tsunami generation is likely to occur in shallow water regions, the interaction between the waves and the sloping sea bottom plays a relevant role. The waves can be refracted by the interaction with the bottom, and trapping mechanisms, like those typical of edge waves, can occur (Romano et al., 2013).
The complex interaction that exists between the generation and propagation mechanisms must therefore be carefully considered for a proper understanding of the generated wave features and, consequently, for developing effective tsunami early warning systems working in real time (e.g., Bellotti et al., 2009; Cecioni et al., 2011). Experimental tests are often time-consuming and expensive, especially if 3-D models are considered. Large facilities, as well as complex experimental configurations and sophisticated measurement systems, are often needed (see McFall & Fritz, 2016; Romano et al., 2016). Furthermore, it is not always possible to explore in detail the influence of all the involved parameters. In this sense, tsunamis generated by submerged landslides provide a good example. Often, the waves generated by submerged landslides are too small to get reliable measurements in the experimental facilities. Moreover, it can be difficult to explore the influence of a key governing parameter such as the initial acceleration a_0. This parameter is commonly recognized to be a crucial one in the slide kinematics, in particular in the initial phase, when the energy transfer between the landslide and the water takes place. Indeed, a_0 is directly related to the triggering mechanisms of the landslide and governs the key parameters of the tsunami source (Enet & Grilli, 2007; Grilli et al., 2009; Kim et al., 2019; Løvholt et al., 2015; Najafi-Jilani & Ataie-Ashtiani, 2008; Romano et al., 2017; Watts, 1998; Watts et al., 2005). Several experimental studies explored the importance of a_0 by means of different techniques. Watts (1998) changed the landslide's density to obtain different values of a_0. More recently, Romano et al. (2017) used a mechanical system controlled by an electric motor to change the kinematics of the landslide. Nevertheless, physical restrictions are inevitable, hindering the exploration of a wider range of cases (i.e., different landslide-triggering mechanisms). The most recently developed tools offered by computational fluid dynamics (CFD) can provide significant support for shedding light on many of the unresolved aspects. In particular, they can be very useful to model the near-field wave characteristics induced by submerged landslides, exploring the influence of the landslide-triggering mechanisms in terms of the generated waves. Indeed, the accurate reproduction of the momentum exchange between the landslide and the water body, guaranteed by CFD methods, is crucial for a detailed modeling of tsunami generation, propagation, and the interaction with the coastline. In this paper, a numerical study of the near-field wave characteristics of tsunamis generated by submerged landslides is presented. To this end, we used a new method for numerically modeling tsunamis generated by rigid and impermeable submerged landslides with OpenFOAM® (v1812), based on the overset mesh technique. The overset mesh method is newly introduced in the coastal engineering field. This technique has been successfully used to model the dynamics of floating bodies under the effects of waves and currents (e.g., Di Paolo et al., 2018) and other hydrodynamics problems (Chen, Qian, et al., 2019). The overset mesh is based on the use of two (or more) domains. The outer one (i.e., the background domain) allows the motion of one, or more, inner domain(s) (i.e., moving domain(s)) containing a rigid body. The mutual exchange of information between the two domains is achieved by interpolation.
The advantage of this approach, compared with other methods available to simulate the interaction between a moving body and one or more fluids in OpenFOAM®, for example the immersed boundary method (Chen, Heller, et al., 2019; Jasak et al., 2014), is that the resolution around the moving body is extremely accurate (i.e., a body-fitted approach) and, which is even more important, remains constant throughout the simulation. The modeling of the solid boundaries on which the landslide moves is another point of novelty of the present approach. The numerical reproduction of a body moving close to an impermeable surface is not possible using the overset mesh method because of the mentioned interpolation procedure. Indeed, a few computational cells are needed between the body and the domain's edges. In order to overcome this requirement of the method, the solid bodies on which the landslide moves are modeled as porous media with a very low permeability, using the VARANS approach proposed by del Jesus et al. (2012), Lara et al. (2012), and Losada et al. (2016). In order to validate the proposed approach against experimental data, the numerical reproduction of the experimental benchmark described by Liu et al. (2005), for landslide-generated tsunamis triggered by a rigid and impermeable wedge at a sloping coast, has been carried out. The validated numerical method has further been applied to a detailed study of the near-field wave features induced by submerged landslides. Parametric simulations, varying the initial acceleration a_0, have been carried out to explore the importance of this parameter on the tsunami generation process and on the related wave characteristics. The quantitative spatial analysis carried out in the near field points out the significant influence of the initial acceleration on the tsunami wave features, also showing that "saturation" mechanisms (i.e., no more energy can be effectively transferred from the landslide to the water to generate larger waves) may occur for increasing values of the initial acceleration, confirming and extending the previous theoretical findings of Tinti and Bortolucci (2000). Furthermore, the new numerical results have been represented, together with experimental data from past works dealing with different landslide geometries and configurations, in the form of a "nondimensional wavemaker curve" (Watts, 1998). The excellent agreement between the previous experimental data and the numerical ones, involving wide parameter ranges, allowed us to propose a new relationship for predicting the wave characteristics in the near field, induced by rigid and impermeable submerged landslides, as a function of the Hammack number. The paper is structured as follows. After this introduction, the description of the numerical method is provided in section 2, while the validation of the method itself against experimental results is given in section 3. Sections 4 and 5 show the application of the proposed approach to investigate the features of the tsunami wave pattern induced by landslide-generated tsunamis in the near field. Finally, section 6 with the concluding remarks closes the paper.

Numerical Model

The new approach for numerically modeling tsunamis generated by rigid and impermeable landslides, described in this paper, has been developed on the OpenFOAM® platform (Jasak, 1996).
IHFOAM (Higuera et al., 2013a, 2013b), based on interFoam of OpenFOAM®, includes wave boundary conditions and porous media solvers for coastal and offshore engineering applications and can solve both the 3-D Reynolds-Averaged Navier-Stokes equations (RANS) and the Volume-Averaged RANS equations (VARANS) for two-phase flows. In the present work both the RANS and VARANS equations have been used and solved, coupled to the Volume of Fluid (VOF) equation and to the overset mesh method. In this section the base equations as well as a description of the proposed method are presented.

Governing Equations

The RANS equations, which allow to model the flow in the clear-fluid region, are based on the Reynolds decomposition, which identifies an average and a fluctuating component (i.e., of the velocity and pressure fields for incompressible models). These equations are represented by the mass and momentum conservation equations, coupled to the VOF equation, as follows:

$$\frac{\partial u_i}{\partial x_i} = 0 \qquad (1)$$

$$\frac{\partial \rho u_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\rho u_j u_i\right) = -\frac{\partial p^*}{\partial x_i} - g_j x_j \frac{\partial \rho}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\mu_{\mathrm{eff}}\,\frac{\partial u_i}{\partial x_j}\right) + f_{\sigma i} \qquad (2)$$

$$\frac{\partial \alpha}{\partial t} + \frac{\partial (u_i\,\alpha)}{\partial x_i} + \frac{\partial \left(u_{ci}\,\alpha(1-\alpha)\right)}{\partial x_i} = 0 \qquad (3)$$

where u_i (m/s) are the ensemble-averaged components of the velocity, x_i (m) the Cartesian coordinates, g_j (m/s²) the components of the gravitational acceleration, ρ (kg/m³) the density of the fluid, p* the ensemble-averaged pressure in excess of hydrostatic, defined as p* = p − ρg_j x_j (Pa), with p the total pressure, α (−) the volume fraction (VOF indicator function), which is assumed to be 1 for the water phase and 0 for the air phase, and f_σi (N/m³) the surface tension, defined as f_σi = σκ ∂α/∂x_i, where σ (N/m) is the surface tension constant and κ (1/m) the interface curvature (Brackbill et al., 1992). μ_eff (Pa·s) is the effective dynamic viscosity, defined as μ_eff = μ + ρν_t, which takes into account the dynamic molecular (μ) and the turbulent viscosity effects (ρν_t); ν_t (m²/s) is the eddy viscosity, which is provided by the turbulence closure model. Finally, the compression velocity u_ci (m/s) is calculated as u_ci = min[c_α|u_i|, max(|u_i|)], where the compression coefficient c_α (−) is assumed to be 1 (Marschall et al., 2012; Weller, 2008). The VARANS equations allow to model the flow inside an eventual porous material, which is modeled as a continuous medium. As shown in the following, additional terms are considered in the momentum equation to account for the frictional forces exerted by the porous media. The volume-averaged mass and momentum conservation equations, coupled to the VOF equation, take the same form as Equations 1-3, written for the volume-averaged ensemble-averaged (Darcy) velocity components ū_i (m/s), with additional porous drag terms of the form (A + B|ū|)ū plus a transient term weighted by c. Following Engelund (1953), as modified by Van Gent (1995), the friction coefficients A, B, and c are expressed in terms of D_50 (m), the mean nominal diameter of the porous material, the Keulegan-Carpenter number KC (−), the empirical nondimensional coefficients a (−) and b (−) (see Lara et al., 2011; Losada et al., 2016), and the nondimensional parameter γ = 0.34 (−) proposed by Van Gent (1995). These equations have been implemented in a solver within the OpenFOAM® framework by Higuera et al. (2014a, 2014b). The solver works as follows: in the clear-fluid region (i.e., outside the porous region) the frictional forces exerted by the porous media are removed (i.e., a = b = c = 0) and n = 1; thus, the VARANS reduce to the RANS. Inside the porous region the empirical coefficients, the parameters and the porosity related to the porous media (i.e., a, b, c, D_50, KC, and n) are defined; thus, the full set of VARANS is solved.
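To make the role of the interface compression term in Equation 3 concrete, the following minimal 1-D finite-volume sketch (an illustration only, not the OpenFOAM implementation; the grid size, time step, velocity, and boxcar initial condition are arbitrary choices) advects the VOF fraction α with first-order upwinding and applies the compression flux u_c α(1−α), directed along the interface normal, only where 0 < α < 1:

```python
import numpy as np

# 1-D illustrative VOF advection with interface compression
nx, L = 200, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = 0.5                            # uniform advection velocity (m/s)
c_alpha = 1.0                      # compression coefficient, as in the text
dt = 0.4 * dx / abs(u)             # CFL-limited time step

alpha = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # water slug

for _ in range(200):
    # advective face fluxes (first-order upwind, u > 0 here)
    f_adv = u * alpha[:-1]
    # compression velocity acts along the interface normal (sign of grad alpha)
    grad = (alpha[1:] - alpha[:-1]) / dx
    uc = np.minimum(c_alpha * abs(u), abs(u)) * np.sign(grad)
    a_face = 0.5 * (alpha[1:] + alpha[:-1])
    f_cmp = uc * a_face * (1.0 - a_face)    # u_c * alpha * (1 - alpha)
    f = f_adv + f_cmp
    alpha[1:-1] -= dt / dx * (f[1:] - f[:-1])
    alpha = np.clip(alpha, 0.0, 1.0)        # keep the fraction bounded
```

The compression flux is anti-diffusive: at both edges of the slug it transports α back toward the α = 1 side, counteracting the smearing introduced by the upwind advection scheme.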
It should be noted that the solver supports several turbulence models (e.g., the two-equation models k−ε, k−ω, and k−ω SST). In this study, the k−ε turbulence model has been used. Finally, it is worth noticing that the present formulation of the VARANS equations accounts for the spatial variation of the porous media properties (porosity gradient), thus differing from that proposed by Jensen et al. (2014). More details on the VARANS equations can be found in del Jesus et al. (2012), Lara et al. (2012), and Losada et al. (2016), while for a more thorough description of their implementation in OpenFOAM® we refer to Higuera et al. (2014a). In conclusion, the VARANS equations become the RANS equations in a fluid region outside the porous medium, where the porosity becomes 1. Within the porous medium, the porosity takes a value lower than 1, and additional terms are then activated in the momentum equation to include, for example, the frictional forces induced by the porous medium or a decrement of mass and linear momentum.

Overset Mesh Method

In this subsection a brief description of the overset mesh framework (also known as the Chimera or overlapping grids technique) is provided. To the knowledge of the authors, very few works have applied this promising technique to coastal and offshore engineering applications. This technique has mainly been used to simulate the dynamics of floating objects and the so-called water entry problem (e.g., Ma et al., 2018; Windt et al., 2018). More recently, Di Paolo et al. (2018) have applied this mesh technique to simulate the dynamics of floating bodies under the effects of waves and currents, while Chen, Qian, et al. (2019) have applied the overset mesh method to reproduce a numerical wave tank for modeling free-surface hydrodynamic problems (e.g., the water entry problem and the dynamics of floating objects). The overset mesh method is based on the use of two (or more) domains. The outer one (i.e., the background domain) allows the motion of one, or more, inner domain(s) (i.e., moving domain(s)) containing a rigid body. Therefore, the two domains, which overlap each other, can be used to simulate a large variety of hydrodynamic applications, especially if large displacements are considered. The left panel of Figure 2 shows a sketch depicting the features of the method. The background domain and the moving one (blue hatching) are represented. Within the latter, the rigid body (i.e., the landslide) is modeled as a rigid and impermeable object (red hatching). Furthermore, the same panel schematically represents the characteristics of the interaction between the two domains. The moving domain, containing the object, can move through the background one with six degrees of freedom. As stated, the two-way exchange of information between the background mesh and the moving one, needed to preserve continuity in the conservation of mass and linear momentum equations, requires interpolation of different scalar (pressure, density, turbulent kinetic energy, etc.) and vector fields (fluid velocity). The method followed to interpolate those magnitudes is the inverse distance method. Conversely to other techniques (e.g., the immersed boundary method or deforming meshes), this method offers the advantage that the resolution around the moving body is extremely accurate (i.e., a body-fitted approach) and remains constant throughout the simulation.
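As a generic illustration of the inverse-distance idea (a sketch of the data exchange concept, not the OpenFOAM overset implementation; the donor positions and the field values are made up for the example), the value at an acceptor cell center is a distance-weighted average of the donor-cell values:

```python
import numpy as np

def idw(acceptor, donors, values, p=2, eps=1e-12):
    """Inverse-distance-weighted interpolation of donor-cell values
    to an acceptor point (generic sketch of overset data exchange)."""
    d = np.linalg.norm(donors - acceptor, axis=1)
    w = 1.0 / (d**p + eps)          # closer donors weigh more
    return np.sum(w * values) / np.sum(w)

donors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pressure = np.array([101.0, 102.5, 100.8, 103.1])   # made-up donor values
print(idw(np.array([0.3, 0.4]), donors, pressure))
```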
Thus, the strength of the overset mesh method lies in its ability to represent complex geometries while maintaining a good quality mesh, especially for large-amplitude body motions (Chen, Qian, et al., 2019; Ma et al., 2018). This aspect is important for the momentum exchange between a rigid body and the water.

The New Approach for Landslide-Generated Tsunamis

All the features described in sections 2.1 and 2.2 have been coupled to develop a numerical tool able to model tsunamis generated by rigid and impermeable landslides at a sloping coast. Indeed, although the overset mesh method seems suitable to address this task, the numerical modeling of a body moving in contact with a solid and impermeable boundary (i.e., a sloping coast) is not yet possible because of the interpolation on which the implementation is based. Indeed, a few computational cells are needed between the body and the domain's boundaries. Obviously, this requirement of the overset mesh method does not affect the hydrodynamic modeling of floating bodies that are placed in the inner part of the numerical domain (i.e., far from the domain's boundaries), as shown in the cited works. Nevertheless, as far as landslide-generated tsunamis occurring at a sloping coast are concerned, for which the momentum transfer between the landslide and the water takes place during the sliding of the body along the inclined surface, this numerical implementation is no longer an option. In order to overcome this requirement of the overset mesh method we used an innovative approach to model the sloping coast along which the landslide moves. We modeled the sloping coast as a porous medium characterized by a very low permeability (i.e., porosity n < 0.01) in order to simulate an impermeable surface, as shown in the right panel of Figure 2. This panel shows that the moving domain can be partly immersed in the porous medium, completely matching the overset requirement without affecting the wave generation processes. Therefore, the sea bottom is not modeled as a solid boundary; instead, it is just a part of the background domain in which a different set of equations (i.e., the VARANS equations, valid for porous media flow) is solved. This approach allows the moving domain, which contains the landslide, to move through the background one and, consequently, the body to move while touching the sloping coast. Obviously, as discussed later, a preliminary tuning of the numerical parameters which characterize the porous media flow is necessary to represent the sloping coast as close to an impermeable surface as possible. To summarize, this approach is expected to be a powerful tool to model the phenomenon at hand (i.e., tsunamis generated by rigid and impermeable landslides), since, conversely to other techniques and implementations (e.g., cut cells and the immersed boundary method), it allows to resolve the boundary layer around complex 3-D geometries.

Validation Against Experimental Data

In order to validate the proposed approach and, consequently, to safely use the tool itself for the subsequent parametric simulations, we numerically reproduced the experiments of Liu et al. (2005), valid for landslide-generated tsunamis triggered by a rigid and impermeable wedge at a sloping coast. The validation procedure is shown in this section.

Description of the Validation Case

A brief description of the experiments carried out by Liu et al. (2005) is given here, while the reader is referred to the original paper for more details.
The large-scale experiments were carried out in a 104.0-m long, 3.7-m wide, and 4.6-m deep wave tank, following Froude similarity laws. A plane slope, having an angle of inclination θ with the horizontal (tan θ = 1/2), was located near one end of the tank and a dissipating beach at the other end. For all experiments, the water depth in the wave tank was about 2.44 m. Liu et al. (2005) used several geometries (a wedge and a hemisphere) to represent the landslide; in this paper only the wedge has been considered. The wedge-shaped slide has the following dimensions: length b = 0.9144 m, front face height a = 0.4572 m, and width w = 0.6525 m. Different initial slide positions were used during their experiments, ranging from subaerial to submerged. Nevertheless, it is worth noticing that in the present paper only submerged landslides have been modeled. The slides moved down the slope by gravity, rolling on wheels. Figure 3 shows a definition sketch of the experimental setup as well as the nomenclature of the parameters used by Liu et al. (2005). The vertical distance between the still water level and the landslide's upper face is d, following the nomenclature shown in the upper right panel of Figure 3. The spatial coordinate x is measured as the seaward distance starting from the intersection of the SWL with the slope. The runup and the free-surface elevation time series were recorded with wave gauges. The free-surface elevation time series, measured along the centerline of the landslide by three wave gauges (WG1, WG2, and WG3), placed at x_WG1 = 1.796 m, x_WG2 = 2.180 m, and x_WG3 = 2.564 m (see Figure 3), in a test with a submerged landslide (d/b = −0.33), have been used as the validation case for our numerical approach.

Landslide Motion

To date, the numerical reproduction of the landslide kinematics, even when a simple geometry and a rigid, impermeable landslide are considered, is not an easy task. The physical phenomena governing both the triggering and the evolution mechanisms of a landslide (submerged or subaerial) are far from being included in most numerical hydrodynamics codes, although some remarkable progress has recently been achieved (e.g., Clous & Abadie, 2019; Shi et al., 2016; Si et al., 2018). Additionally, the governing equation of landslide motion has been widely used in past studies (e.g., Romano et al., 2016, 2017), and, at least in the case of submerged landslides, analytical solutions are available (e.g., Pelinovsky & Poplavsky, 1996; Watts, 1998). Therefore, in this paper, we used the analytical solution provided by Pelinovsky and Poplavsky (1996), and later by Watts (1998), to reproduce the movement of the landslide, which has also been applied by Liu et al. (2005) to validate their large-eddy simulations. The equation of motion of a sliding rigid body reads as follows:

$$(m + C_m m_0)\,\frac{d^2 s}{dt^2} = (m - m_0)\,g\,(\sin\theta - C_n\cos\theta) - \frac{1}{2}\,\rho\,C_d\,A\left(\frac{ds}{dt}\right)^2 \qquad (10)$$

where m is the landslide mass, s the landslide displacement along the slope, t the elapsed time, g the gravitational acceleration, θ the incline slope angle, C_n the Coulomb friction coefficient, C_m the added mass coefficient, m_0 the displaced water mass, A the main cross section of the moving landslide (i.e., perpendicular to the direction of motion), ρ the water density, and C_d the global drag coefficient.
In the case of submerged landslides, the analytical solution of Equation 10, provided by Watts (1998), is

$$s(t) = \frac{u_t^2}{a_0}\,\ln\!\left[\cosh\!\left(\frac{a_0\,t}{u_t}\right)\right] \qquad (11)$$

$$u(t) = u_t\,\tanh\!\left(\frac{a_0\,t}{u_t}\right) \qquad (12)$$

where a_0 is the initial acceleration and u_t is the terminal velocity, which can be easily calculated as u_t = [2(m − m_0) g (sin θ − C_n cos θ)/(ρ C_d A)]^{1/2} once the hydrodynamic coefficients have been estimated. The left panel of Figure 4 shows the comparison between the analytical solution, used in this paper, and the experimental landslide motion from one of the experiments of Liu et al. (2005) that has been used for the validation of the new approach. In Figure 4 the red line refers to the analytical solution, while the black markers refer to the experimental data. Furthermore, the right panel of the figure shows the velocity of the body as obtained from Equation 10.

Numerical Setup

A numerical wave tank has been designed in order to reproduce the experiments of Liu et al. (2005). The numerical domain and the related boundary conditions are presented in Figure 3. According to the overset mesh method, a background domain and a moving one, containing the impermeable body (i.e., the landslide), have been defined. The background domain is 6.5 m long, 3.7 m wide and 4.0 m high, while the moving one is 1.3 m long, 1.1 m wide and 0.9 m high. A preliminary grid refinement study has been carried out, following the approach described in Devolder et al. (2017). Three different mesh configurations, called coarse (C), medium (M), and fine (F), respectively, have been used for this purpose. The root-mean-square error (RMSE) of the free-surface elevation time series, measured at WG3, has been used to quantify the discrepancies between experimental and numerical results. A summary of the mesh resolution, number of computational cells, computational time (CT), and RMSE, related to each preliminary case, is provided in Table 1. The RMSEs related to the preliminary cases clearly show that the numerical results converge monotonically towards the experimental ones. The mesh labeled Fine allows to obtain an RMSE = 0.0013 m, which is very small both in absolute and in relative terms (one order of magnitude smaller than the values obtained with the two coarser meshes). On the other hand, this mesh is obviously very computationally expensive (more than 85 million cells). In order to balance numerical model accuracy and computational cost, a new mesh, named Medium-Fine (M-F), has been created. The characteristics of this trade-off mesh are reported in Table 1. It can be seen that the RMSE for the Fine (RMSE = 0.0013 m) and Medium-Fine (RMSE = 0.0031 m) mesh configurations is of the same order of magnitude, testifying that a further refinement would only increase the CTs without leading to significant improvements of the results. Thus, the chosen mesh (Medium-Fine) for the background domain is characterized by a cell resolution of 0.025 m along the x and y directions and 0.014 m along the z direction. In the moving domain the mesh is characterized by the same cell resolution as the background one, but the body-fitted approach ensures a much more detailed mesh resolution around the object (i.e., cell sizes in the order of a few mm), which is placed in the center of the moving domain. The object has been modeled as a rigid and impermeable body. The geometrical dimensions of the landslide body are exactly the same as those used by Liu et al. (2005). The water depth h has been fixed to 2.44 m.
An active wave absorption boundary condition (Higuera et al., 2013a) has been applied at the right side of the numerical wave tank, while along the solid impermeable boundaries (lateral walls, left side, roof, and bottom) a no-slip velocity condition has been imposed. Furthermore, the plane slope (i.e., the sloping coast) along which the body slides has been modeled as a porous medium characterized by a very low permeability (i.e., hydraulic conductivity K < 9.0·10⁻¹⁰ m/s). The moving domain (with the body contained in it) moves through the background domain according to Equation 11, as shown in the left panel of Figure 4.

Comparison with Experimental Data and Discussion of the Results

This validation focuses on the near field only. Thus, the numerical simulation has been stopped 1.6 s after the beginning of the landslide's motion. Indeed, in this time window, the near-field wave features (i.e., the first wave trough and first wave crest), evaluated by means of three free-surface elevation time series, are completely developed. Figure 5 shows the comparison between numerical (red lines) and experimental (black dots) results for the three selected wave gauges. The three panels show the high degree of agreement between the two sets of data; it is evident that the numerical model is able to carefully reproduce the physics of such a complex physical phenomenon. In the upper panel (η_WG1), as the submerged landslide starts to move, a small wave trough develops. A similar behavior is shown in the middle panel (η_WG2). Looking at the lower panel (η_WG3), the free-surface elevation time series first exhibits a wave trough followed by a wave crest, jointly induced by the rebound of the first wave trough and by the piston-like mechanism, which is a peculiar feature of the waves generated by submerged landslides. Moreover, Figure 6 shows a contour plot of the free-surface elevation, evaluated at four different time instants from the landslide release. Figure 6 highlights the good degree of symmetry of the numerical results throughout the simulation. In order to validate the numerical approach, only the available experimental measurements (i.e., free-surface elevation time series) have been used, but it is clear that the great advantage of using a full-3-D CFD tool, based on the Navier-Stokes equations, lies in having the full 3-D description (i.e., free-surface elevation, velocity, and pressure fields) of the phenomenon in the whole numerical domain, as shown in Figure 7. In this figure the velocity field induced by the movement of the landslide at six selected time instants is presented. Each row in Figure 7 refers to a different time instant, while each column presents a given quantity at the considered time step: the first column shows the overset domain that, containing the landslide and embedded in the porous medium, travels through the background one; the second and the third columns represent the contour plot and the vector diagram of the velocity field, respectively. Figure 7 shows the great potential of the numerical tool to describe the complex flow induced by landslide motion. Indeed, the velocity field accurately depicts the tsunami generation process.
As the landslide starts to move, the volume of water above the moving body is pulled downward, triggering the formation of the characteristic wave trough, while in front of the moving body the velocity field exhibits a horizontal component; thus, the landslide, pushing the water in front of it, triggers the piston-like mechanism that generates the leading wave crest propagating seaward. Furthermore, a vortex structure is generated at the very tip of the moving body. As time increases, this whirling structure develops, moving upward and detaching from the moving landslide, which continues its descent. Furthermore, given that the sloping coast is modeled as a porous medium characterized by a low permeability, in which the VARANS equations are solved, it is important to confirm that the flow velocity within the porous medium is as small as possible (i.e., nearly zero), that the velocity profiles at the interface surface of the porous medium are conveniently smooth, and that mass is conserved throughout the simulation. In the lower panel of Figure 8, the contour plots of the velocity magnitude inside (grayscale colorbar) and outside (red and blue colorbar) the porous medium, at the time instant t = 0.75 s, are presented. It can clearly be seen that the flow velocity within the porous medium exhibits very small values, below 10⁻⁴ m/s. Furthermore, a negligible flow velocity is detectable at the interface surface of the porous medium, testifying that neither an incoming nor an outgoing water flux is observed. The upper panel of Figure 8 presents the matching behavior of the velocity profiles at the interface surface of the porous medium. In this panel the velocity profiles (blue dots) at a given time (t = 0.75 s), measured at four virtual gauges (VG_1, VG_2, VG_3, and VG_4, thin black lines), deployed on the symmetry plane (y = 1.85 m) and perpendicularly crossing the porous medium surface (i.e., the sloping coast), are represented. Note that the presented values of the profiles do not reflect the real values of the flow velocity magnitudes (shown in the second and third columns of Figure 7 and in the lower panel of Figure 8), as they have been distorted for graphical needs, to magnify the shape of the profiles themselves. As previously stated, the flow velocity is nearly zero (i.e., the blue dots coincide with the thin black lines) within the porous medium region, and the flow transition at the porous interface appears to be very smooth. This behavior is clearly detectable looking at VG_1, VG_3, and VG_4, but not at VG_2, as the landslide model is passing over the virtual gauge at the considered time instant. Thus, at VG_2 the velocity magnitude is zero within the porous medium and, consistently, exhibits values different from zero just above the moving body. Moreover, the mass conservation throughout the entire simulation is discussed. To this end, the mass of both phases (air and water) has been checked during the whole simulation time, resulting in a final variation of 2.46·10⁻⁵% and 5.42·10⁻⁶% with respect to the initial fraction of water and air, respectively. To the knowledge of the authors, this is the first time that mass conservation is explicitly discussed; in previous studies where porous media solvers were used (e.g., Higuera et al., 2014a, 2014b; Jacobsen et al., 2015, 2018), mass was not fully conserved.
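A check of this kind can be expressed compactly: summing the phase fraction over the cell volumes at each output time gives the instantaneous water (and air) mass, whose relative drift quantifies conservation. The sketch below is only an illustration of the bookkeeping (the array names and the assumption of flat per-cell fields are ours, not OpenFOAM API calls):

```python
import numpy as np

def phase_mass(alpha, cell_volumes, rho_w=1000.0, rho_a=1.0):
    """Water and air mass from the VOF fraction field; 'alpha' and
    'cell_volumes' are assumed to be flat per-cell arrays."""
    m_water = np.sum(rho_w * alpha * cell_volumes)
    m_air = np.sum(rho_a * (1.0 - alpha) * cell_volumes)
    return m_water, m_air

def relative_drift(m_t, m_0):
    """Percent drift with respect to the initial mass, as reported in the text."""
    return 100.0 * (m_t - m_0) / m_0
```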
To conclude, by using this approach the CTs can slightly increase, as an additional part of the domain (i.e., the porous medium) is taken into account. However, this increase is not significant, as the flow velocity is almost zero within the porous region, as shown in the upper left and in the lower panels of Figure 8.

Description of the Parametric Simulations

As previously stated, this work aims to investigate the near-field wave characteristics of tsunamis generated by submerged landslides by exploring the effects of different landslide-triggering mechanisms (i.e., changes of a_0). Therefore, the new numerical approach has been applied to investigate the tsunami wave features in the near field. Parametric simulations have been performed by varying the initial acceleration a_0, aiming at investigating landslide-triggering mechanisms beyond purely gravity-driven ones. Similar experiments have been conducted in the past, although by using completely different approaches and dealing with different layouts, by Watts (1998) and by Romano et al. (2017). Therefore, three different sets of parametric simulations PSSj (with j = 1, 2, 3) have been carried out. For each PSSj, nine different values of initial acceleration a_0 (viz., a_0 = 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, and 8.0 m/s²) have been used, keeping a few selected parameters constant (i.e., the landslide dimensions b, a, and w, the initial position of the landslide, the d/b ratio, and the water depth h). The nine values of a_0, together with the other hydrodynamic parameters previously calculated, have been used to obtain, via Equations 11 and 12, the corresponding landslide motions in terms of s(t) and u(t), respectively. The nine values of initial acceleration have been chosen with the precise purpose of extending the range of the Hammack number (H_a0), defined as

$$H_{a0} = \frac{u_t \sqrt{g\,d}}{a_0\, b} \qquad (13)$$

previously explored by experimental studies (e.g., Enet & Grilli, 2007; Romano et al., 2017; Watts, 1998), and, in particular, of exploring those values of H_a0 that can hardly be (safely) obtained in laboratory experiments (i.e., H_a0 < 3.0 and H_a0 > 5.0). Hence, a wide range of landslide-triggering mechanisms, dynamics, and rheologies (i.e., rock slide, earth slide, debris flow, liquefaction, etc.) is investigated, although a direct link between a_0 and the landslide type is not straightforward to obtain. Indeed, previous studies related to tsunamis generated by submerged landslides (mainly gravity driven) show that a_0 exhibits a wide range of values: for example, a_0 up to 0.06 m/s² (Enet & Grilli, 2007) and 1.50 m/s² (Sue et al., 2011). Moreover, it is well documented that other phenomena (ground liquefaction, thermal pressurization, etc.) can change the landslide rheology and dynamics, resulting in landslides with a higher mobility than that provided by a simple gravity-driven instability. For instance, during the Vajont event it is estimated that an acceleration of 2.60 m/s² was reached due to thermal pressurization (Veveakis et al., 2007). In light of the above, the general idea of varying a_0 is to investigate the possible effects, in terms of the generated wave characteristics, of a wide range of landslide-triggering mechanisms.
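As a concrete illustration of how the nine motion laws and the corresponding Hammack numbers can be generated (a minimal sketch assuming the Watts (1998) kinematics of Equations 11 and 12; the u_t, d, and b values below are illustrative placeholders, not the PSS parameters), consider:

```python
import numpy as np

G = 9.81

def slide_kinematics(t, a0, ut):
    """Watts (1998) law (Equations 11 and 12): s(t) and u(t) of the slide."""
    tau = a0 * t / ut
    return (ut**2 / a0) * np.log(np.cosh(tau)), ut * np.tanh(tau)

def hammack_number(a0, ut, d, b):
    """H_a0 = u_t*sqrt(g*d)/(a0*b), as recalled above (Equation 13)."""
    return ut * np.sqrt(G * d) / (a0 * b)

a0_set = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 8.0])  # m/s^2
ut, d, b = 2.0, 0.3, 0.9144        # illustrative values only

for a0 in a0_set:
    # each run stops once the slide has traveled 2.9 m (see below):
    # inverting s(t) = 2.9 gives t_stop = (ut/a0) * arccosh(exp(2.9*a0/ut**2))
    t_stop = (ut / a0) * np.arccosh(np.exp(2.9 * a0 / ut**2))
    s, u = slide_kinematics(np.linspace(0.0, t_stop, 100), a0, ut)
    print(f"a0={a0:4.1f}  t_stop={t_stop:5.2f} s  "
          f"H_a0={hammack_number(a0, ut, d, b):5.2f}")
```

Larger a_0 yields shorter stopping times and smaller Hammack numbers, which is the mechanism exploited to widen the explored H_a0 range.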
The a_0 values have been chosen arbitrarily, but with the aim of pursuing two main purposes: (a) to investigate the effects of these rapidly evolving landslide phenomena; (b) to bring out the asymptotic physical features of landslide-generated tsunamis by performing a parametric analysis of a_0. The nine landslide motion curves used for each set of parametric simulations are shown in Figure 9 with black lines. The left panel of this figure reports the time series of the landslide displacement, while the right one shows the time series of the landslide velocity. By varying the initial acceleration, the displacement of the landslide at a given time instant changes among the nine curves. Therefore, in order to compare similar conditions, the motion of the landslide has been stopped once the body has traveled 2.9 m. This condition is matched at different times depending on a_0. The red dashed lines refer to the kinematic characteristics of the landslide, namely, the time series of displacement (left panel) and velocity (right panel), as from the experiment of Liu et al. (2005). A few parameters have been changed among the different sets, to investigate a wide range of conditions, as shown in Table 2.

Analysis of the Near Field: Results and Discussion

This section discusses the near-field wave characteristics. In order to postprocess the numerical results, several virtual wave gauges have been deployed into the domain, as shown in the left panel of Figure 10. In this panel the numerical wave gauges (which refer to PSS3) are identified by blue crosses, while the initial shoreline and the initial landslide positions are identified by blue dashed and red continuous lines, respectively. The adopted sensor arrays, partially resembling the sensor layout described in Romano et al. (2013) and later in Bellotti and Romano (2017), consist of concentric circles of wave gauges, centered at the barycentric position of the landslide (in its initial position). Therefore, all the free-surface elevation time series from the virtual wave gauges have been analyzed to provide a detailed description of the wave features in the near field. The postprocessing analysis is divided as follows:

• time domain analysis of the free-surface elevation time series;
• spatial analysis of the wave characteristics;
• analysis of the synthetic results (i.e., wave crests and troughs) and comparison with previous studies.

Time Domain Analysis of the Free-Surface Elevation Time Series

In this section only the results of five representative virtual wave gauges are represented for each set of simulations, aiming at showing the effects induced by the variation of the initial acceleration on the generated waves. These five wave gauges (G1, G2, G3, G4, and G5) are represented as blue circles in the right panel of Figure 10. The results are shown in Figure 11. Each row of this figure reports the results of a simulation set (i.e., PSS1, PSS2, and PSS3), while each column refers to the free-surface elevation time series measured by one of the five wave gauges shown in the right panel of Figure 10 (i.e., the first column refers to the results measured by G1, while the fifth column refers to the ones measured by G5). Each panel contains nine free-surface elevation time series, represented in grayscale (light gray refers to small values of a_0, dark gray to large ones). Figure 11 fully reflects the features of the tsunami generation process.
The first column shows that the impulsive phenomenon starts with a free-surface depression, while the second column (i.e., the wave gauge placed at the barycentric position of the landslide initial position) shows that a wave crest follows the first large wave trough, as a consequence of the typical rebound of the free surface. The following columns show that the tsunami signal starts with a small wave crest, produced by the landslide piston-like mechanism, followed by a large wave trough and a second wave crest, generally larger than the first one. Furthermore, Figure 11 highlights the influence of a_0 on the generated wave signals. As far as different values of a_0 are concerned, keeping the other geometrical parameters fixed, the characteristics of the tsunamis change dramatically. The magnitude of the wave characteristics (minimum troughs and maximum crests) increases with a_0. Additionally, the rising time of the first wave trough decreases and, in general, the wave signals exhibit a narrower and sharper shape (i.e., decreasing wave periods). These preliminary results confirm the experimental findings of Romano et al. (2017), although a different configuration is considered here. A detailed spatial analysis of the wave characteristics is provided in the following sections.

Spatial Analysis of the Wave Characteristics

A standard time-domain analysis (i.e., a wave-by-wave analysis) has been applied to extract the wave characteristics from each free-surface elevation time series measured by the N virtual wave gauges shown in the left panel of Figure 10. The wave characteristics of interest are:

• η_c^max-i: the maximum wave crest detected in the i-th (i = 1,2,…,N) time series (i.e., the spatial envelope of the maximum wave crests);
• η_t^min-i: the minimum wave trough detected in the i-th (i = 1,2,…,N) time series (i.e., the spatial envelope of the minimum wave troughs);
• η_c^max: the maximum wave crest (i.e., the maximum value among the η_c^max-i);
• η_t^min: the minimum wave trough (i.e., the minimum value among the η_t^min-i);
• η_max: the absolute value of the maximum free-surface oscillation detected at the N virtual wave gauges, defined as η_max = max(|η_c^max|, |η_t^min|);
• T_ηmax: the wave period of the wave containing η_max;
• c_t^1st: the mean wave celerity of the first wave trough (measured along the landslide path);
• η_0: the absolute value of the maximum free-surface oscillation measured at the wave gauge having coordinates (x_0, y_0), that is, the barycentric position of the landslide at its initial position, defined as η_0 = max(|η(x_0, y_0, t)|).

Figure 12 shows that the numerical results exhibit a very good spatial symmetry around the symmetry plane at y_0 = 1.85 m, even considering that the represented quantities are not synchronous. Additionally, both panels emphasize the lack of transversal modes; this is expected, as the numerical wave tank is not narrow, especially if compared with the landslide width. Therefore, to simplify the comparison of the wave characteristics in the same plot, as well as to minimize the number of figures, the spatial representation of η_c^max-i and η_t^min-i has been performed by splitting each contour plot into two parts as follows: above the symmetry plane the spatial layout of η_c^max-i is represented, while below the symmetry plane the spatial layout of η_t^min-i is plotted. The nine panels of Figure 13 clearly show that PSS1 exhibits the smallest wave characteristics of the three considered sets.
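A minimal sketch of the wave-by-wave extraction described above (a generic zero-crossing pass over a single gauge record; the synthetic decaying signal stands in for a gauge time series and is not simulation output):

```python
import numpy as np

def wave_by_wave(t, eta):
    """Zero-crossing (wave-by-wave) analysis of one gauge record:
    returns the maximum crest, the minimum trough and the apparent
    wave periods (time between consecutive up-crossings)."""
    s = np.sign(eta)
    zc = np.where(s[:-1] * s[1:] < 0)[0]          # zero crossings
    crests = [eta[i0:i1 + 1].max() for i0, i1 in zip(zc[:-1], zc[1:])
              if eta[i0:i1 + 1].max() > 0]
    troughs = [eta[i0:i1 + 1].min() for i0, i1 in zip(zc[:-1], zc[1:])
               if eta[i0:i1 + 1].min() < 0]
    up = [i for i in zc if eta[i] < eta[i + 1]]   # up-crossings only
    periods = [t[j] - t[i] for i, j in zip(up[:-1], up[1:])]
    return max(crests), min(troughs), periods

t = np.linspace(0.0, 10.0, 2000)
eta = 0.05 * np.sin(2.0 * np.pi * t / 2.0) * np.exp(-0.1 * t)
eta_max_c, eta_min_t, T = wave_by_wave(t, eta)
```

Applying such a pass to each of the N gauge records and taking the extrema over the array yields η_c^max, η_t^min, η_max, and the associated period T_ηmax.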
The large d/b ratio (d/b = −0.33), together with the small values of the landslide initial acceleration a_0 (first row of the figure), results in small tsunamis, typically characterized by significantly larger troughs than crests. Nevertheless, as the initial acceleration increases (second and third rows of the figure), the generated tsunamis, and accordingly the wave characteristics, increase as well. The minimum wave troughs are always larger (up to two times) than the maximum wave crests, as expected for submerged landslide tsunamis. Furthermore, from the fifth initial acceleration onwards (a_0 = 3.0 m/s²), the minimum wave trough starts to have significant values shoreward of the initial landslide position, that is, close to the shoreline. Finally, it can be seen that from the sixth initial acceleration onwards (a_0 = 4.0 m/s²), some reflection phenomena, between the wave trough and the numerical wave tank walls, occur. These phenomena, detected throughout all the simulation sets for increasing a_0, fall within the time window used for the presented analysis, are limited to the portion of the domain adjacent to the tank walls, and do not contaminate the near-field wave characteristics. The same representation is provided in Figures 14 and 15 for PSS2 and PSS3, respectively. In these cases, the contour plots show that larger tsunamis, compared with those of PSS1, characterize these two sets. This is expected, as d/b is smaller than that of PSS1 (d/b = −0.16 for PSS2 and d/b = −0.13 for PSS3). The first initial acceleration, a_0 = 0.5 m/s², produces small tsunamis. Starting from the second value of a_0, the tsunami crests and troughs exhibit larger values throughout the domain. No significant differences can be detected between the two simulation sets. Furthermore, the figures show some wave reflection, between the wave trough and the numerical wave tank walls, which does not contaminate the near-field wave features. The spatial analysis of the parametric simulations, shown in Figures 13, 14, and 15, confirms that the influence of the initial acceleration a_0 on the tsunami generation mechanisms and on the near-field wave features is significant. The left panel of Figure 16 shows that, as a_0 increases, η_c^max increases and η_t^min decreases. This rate of increase (and decrease) appears to be less than linear for all the considered sets. Furthermore, as previously argued, the minimum wave troughs are always larger (in modulus) than the maximum wave crests, up to two times. Both η_c^max and η_t^min seem to approach an asymptotic value for a_0 > 4.0 m/s². This behavior suggests that, for fixed sets of governing parameters, even by increasing a_0 the maximum wave crests (and minimum wave troughs) do not increase (or decrease) further. A saturation effect on the tsunami generation mechanism is observed. This effect is confirmed by previous studies. Indeed, Tinti and Bortolucci (2000) pointed out a similar behavior by introducing the Froude number of the landslide to quantify the saturation. Nevertheless, they mainly evaluated the landslide velocity and the duration of the movement, while here, with a_0, a different descriptor of the landslide kinematics has been investigated. In addition, a similar saturation has also been observed experimentally for edge waves induced by subaerial landslides (Heller & Spinneken, 2015), which are known to play a crucial role in the tsunami alongshore propagation and interaction with the coast (Romano et al., 2013).
Similar considerations arise from the right panel of Figure 16, where η_max is plotted as a function of a_0; red markers refer to PSS1, while blue and black markers refer to PSS2 and PSS3, respectively. Consistently with what was discussed for η_c^max and η_t^min, the smallest values of η_max refer to PSS1, while larger values characterize PSS2 and PSS3. The behavior of η_max as a function of a_0 clearly resembles the one observed in the left panel of Figure 16. As a_0 increases, η_max increases as well, with a rate of increase less than linear for all the considered sets, approaching asymptotic values. Thus, the saturation effect is noticeable also for η_max. Indeed, this important parameter also exhibits a decrease of the growth rate for increasing values of a_0. Moreover, η_max never occurs at the barycentric position of the landslide initial position (see the discussion of η_0 below). The left panel of Figure 17 represents the wave period T_ηmax as a function of a_0 with the common notation. As previously outlined, T_ηmax has been obtained by carrying out a zero-crossing analysis on the apparent wave (i.e., the portion of the free-surface elevation time series) that contains η_max. The left panel of Figure 17 shows that T_ηmax decreases as a_0 increases. This aspect confirms what was proposed based on Figure 11. The wave periods exhibit an average value of about 2.0 s for the smallest a_0, while for increasing a_0 the values of T_ηmax decrease less than linearly, reaching on average an asymptotic value of about 1.0 s. Therefore, a saturation mechanism can also be noticed for T_ηmax, confirming that, for increasing a_0, no more energy can be effectively transferred to the water to generate larger waves. Finally, small differences in T_ηmax are detected among the three sets of simulations, among which only d/b changes. Nevertheless, it should be noticed that the largest wave periods pertain to PSS1. The right panel of Figure 17 shows the mean celerity of the first wave trough c_t^1st, with the common notation, and the mean velocity of the landslide body u_l (red triangles) as a function of the initial acceleration a_0. Note that the mean celerities of the first wave trough have been calculated by using the free-surface elevation time series measured by those wave gauges placed parallel to the landslide path, starting from the barycentric position of the landslide. This panel shows that the mean celerities of the first wave trough also saturate, similarly to the wave periods and wave crests/troughs. For the smaller values of a_0, c_t^1st is in the order of 0.8 m/s (for all the tested configurations). As a_0 increases, the mean celerities of the first wave trough increase less than linearly, reaching a maximum value of about 1.6 m/s. Negligible differences in c_t^1st can be detected among the three sets of simulations. For the four smaller values of a_0, c_t^1st is on average identical to u_l. Starting from the fifth value of a_0 these two parameters begin to diverge, as u_l is always larger than c_t^1st. In other words, u_l starts to be larger than c_t^1st approximately where the saturation region begins. Another parameter of interest, widely used in the scientific literature (e.g., Enet & Grilli, 2007; Romano et al., 2017; Watts, 1998) to describe the wave features of the tsunamis generated by submerged landslides, is η_0. This parameter is crucial to describe the tsunami wave properties in the near field, as it represents the maximum free-surface oscillation measured at the barycentric position of the landslide.
In Figure 18 the values of η 0 , related to each set of parametric simulations, are reported as a function of a 0 (circle markers). Figure 18, in analogy with Figure 16 for η max c and η min t , and Figure 17 for T η max and c 1st t , confirms the general behavior of the wave characteristics described so far. Indeed, Figure 18 shows that η 0 , for all the investigated simulation sets, increases as a 0 increases, exhibiting a less than linear growth that approaches asymptotic values for increasing values of a 0 (i.e., saturation), resembling the behavior of η max (right panel of Figure 16). The spatial analysis shown so far confirms and extends the findings of previous works on tsunamis generated by submarine landslides (Enet & Grilli, 2007; Romano et al., 2017; Tinti & Bortolucci, 2000; Watts, 1998). Therefore, in order to further improve the understanding of the effect of a 0 on the tsunami wave properties in the near field, a comparison between the present results and results from previous studies is presented in the following section.

Analysis of the Synthetic Results

In this section the analysis of the synthetic results is presented. The wave characteristics, obtained from the numerical results and discussed in the previous section, have been analyzed and presented in the form of a so-called "nondimensional wavemaker curve," as carried out by Watts (1998) for 2-D experimental data obtained for a wedge-shaped body sliding on a 45° incline (see Figure 6 of Watts, 1998). The wavemaker curve has been created by representing the maximum nondimensional wave amplitude η 0 a 0 /u 2 t as a function of the Hammack number H a0 . The dimensionless wave amplitudes, calculated from the numerical simulation results, are reported in the left panel of Figure 19 with empty black circles. The range 0.3 ≤ H a0 ≤ 7.5 has been investigated. The arrangement of the numerical data exhibits a power law decay that closely resembles the behavior identified by Watts (1998) on the basis of laboratory tests. Further experimental data have been analyzed in this form and reported in the right panel of Figure 19. Firstly, the 2-D experimental results obtained by Watts (1998) are plotted as empty blue triangles, together with the fitting curve, represented as a continuous blue line within the experimental range 3.0 ≤ H a0 ≤ 4.5 and as a dashed blue line outside the experimental range. It can be seen that the present numerical results lie at a different position with respect to those by Watts (1998), although the general arrangement of the two data sets is very similar. The data obtained by Watts (1998) are 2-D. Therefore, in order to directly compare the 2-D data set with the 3-D one, a correction formula has been applied to the 2-D data and to the relative fitting curve. The correction formula, provided by Watts et al. (2005), reads as follows:

η 0,3D = η 0,2D · w/(w + λ 0 ), (16)

where w is the landslide width and λ 0 = u t √(gd)/a 0 , although it is worth noticing that Heller and Spinneken (2015) pointed out that Equation 16 may give rise to small conversion factors, at least if subaerial slides are considered. The corrected data are presented in the right panel of Figure 19 as empty gray triangles, while the fitting curve is shown in the same plot as a continuous red line within the experimental range of H a0 and as a dashed red line outside the experimental range.
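Assuming the reconstructed form of Equation 16 above, the 2-D to 3-D conversion can be sketched in a few lines of Python; the numerical inputs below are illustrative and are not values from the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def characteristic_length(u_t, d, a0):
    """lambda_0 = u_t * sqrt(g * d) / a0, as defined in the text."""
    return u_t * np.sqrt(G * d) / a0

def amplitude_2d_to_3d(eta0_2d, w, u_t, d, a0):
    """2-D -> 3-D amplitude conversion, assuming the reconstructed
    Equation 16: eta0_3D = eta0_2D * w / (w + lambda_0). As noted in the
    text, the factor can be small when lambda_0 is large (small a0)."""
    lam0 = characteristic_length(u_t, d, a0)
    return eta0_2d * w / (w + lam0)

# Illustrative inputs only (not values from the paper).
print(amplitude_2d_to_3d(eta0_2d=0.01, w=0.5, u_t=1.0, d=0.3, a0=1.0))
```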
Within the Hammack number range investigated by Watts (1998), the numerical and experimental data are in very good agreement (RMSE = 0.5 × 10 −4 ), while outside the considered range the numerical results slightly deviate from the fitting curve for H a0 < 3.0 and still remain in good agreement with the fitting curve for H a0 > 4.5. The data described so far (both numerical and experimental), presented in the right panel of Figure 19, refer to similar geometries of the problem, that is, wedge-shaped landslides sliding on plane slopes, although having different inclination angles, namely, 45° (Watts, 1998) and 26.56° (present study). It is interesting to extend the current representation to completely different geometries and landslide shapes. The two 3-D data sets from Enet and Grilli (2007) and Romano et al. (2017) are therefore introduced. Specifically, the experiments carried out by Enet and Grilli (2007) refer to a smooth Gaussian-shaped landslide body sliding on a 15° inclined plane slope, while the experiments carried out by Romano et al. (2017) refer to a semi-ellipsoid-shaped landslide body sliding along an 18° inclined flank of a conical island. These data have been added to the right panel of Figure 19 as empty gray squares (Enet & Grilli, 2007) and empty gray diamonds (Romano et al., 2017), respectively. These additional data are in good agreement with the previously discussed ones, exhibiting the characteristic power law decay. The Hammack number experimental range related to these extra sources of data spans the interval 1.9 ≤ H a0 ≤ 7.5, thus wider than that explored by Watts (1998) but narrower than that investigated in the present study. Therefore, the considered sources of data, obtained by using different techniques, geometries, and configurations, form an extended data set spanning a wider range of H a0 (i.e., 0.3 ≤ H a0 ≤ 7.5). Moreover, looking at the right panel of Figure 19, it can be seen that the fitting law proposed by Watts (1998) is clearly not adequate to predict the nondimensional wave amplitude outside the tested experimental range, in particular if small values of the Hammack number (i.e., H a0 < 3.0) are considered. In light of the above, this extended data set has been used to obtain a new fitting function in the form of a power law, following the approach by Watts (1998), as follows:

η 0 a 0 /u 2 t = α̃ (H a0 ) β̃ ,

where α̃ = 0.04263 and β̃ = −1.596. The proposed fitting curve, together with the extended database, is presented in Figure 20 with the blue dashed line and black empty circles, respectively. Figure 20 shows that the proposed fitting law is able to describe the arrangement of the fitted data with excellent accuracy. This is also confirmed by the high coefficient of determination (R 2 = 0.9967). Indeed, all the points that form the database exhibit a very small scatter around the new fitting line. Furthermore, in the same plot, the fitting law proposed by Watts (1998) is represented with a red dashed line.
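A brief sketch of how such a power-law wavemaker curve can be fitted and evaluated follows. The coefficients are those reported in the text; the fitting routine (log-log least squares) is a common choice and an assumption here, since the paper does not state its exact procedure.

```python
import numpy as np

ALPHA, BETA = 0.04263, -1.596  # coefficients reported in the text

def wavemaker_curve(H):
    """Proposed fit: eta0 * a0 / u_t**2 = ALPHA * H**BETA."""
    return ALPHA * np.power(H, BETA)

def fit_power_law(H, amp):
    """Least-squares power-law fit in log-log space; returns (alpha, beta)
    such that amp ~ alpha * H**beta."""
    beta, log_alpha = np.polyfit(np.log(H), np.log(amp), 1)
    return np.exp(log_alpha), beta

# Sanity check: data lying on the reported curve recover its coefficients.
H = np.linspace(0.3, 7.5, 50)               # Hammack number range studied
print(fit_power_law(H, wavemaker_curve(H)))  # ~(0.04263, -1.596)
```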
The nondimensional wavemaker curve obtained by Watts (1998) appears to be very accurate in describing the experimental and numerical results for H a0 ≥ 3.0, even for values of the Hammack number larger than those explored by Watts (1998), while for smaller values of H a0 it is not able to accurately predict the nondimensional tsunami wave amplitude. On the contrary, the proposed fitting curve, obtained with the extended database, is more effective in predicting the nondimensional wave amplitude in the near field, especially for large initial accelerations (H a0 < 3.0).

Concluding Remarks and Ongoing Research

In this paper a detailed numerical analysis of the near-field wave characteristics of tsunamis generated by submerged landslides has been presented. A new method for numerically modeling tsunamis generated by rigid and impermeable submerged landslides with OpenFOAM® has been presented and validated. The proposed method consists of coupling the overset mesh technique, a new and promising method in the coastal engineering field, with the well-known porous media approach currently implemented in IHFOAM. This coupling makes it possible to overcome a restriction of the overset mesh method, which does not allow the modeling of a rigid body moving in contact with an impermeable surface. The excellent agreement between the numerical results and the experimental data of Liu et al. (2005) highlights the ability of the proposed approach to reproduce such a complex phenomenon. The numerical method, validated against experimental data, has been applied to perform an extensive set of new parametric simulations aimed at exploring the influence of the landslide-triggering mechanisms on the generated wave characteristics. The same configuration as for the validation has been used. Three sets of parametric simulations, each one characterized by a different value of d/b (i.e., the ratio of the vertical distance between the still water level and the landslide's upper face d to the length of the landslide b; d/b = −0.33, −0.16, −0.13), have been carried out by varying the initial acceleration a 0 , in order to explore different landslide-triggering mechanisms. The numerical results made it possible to perform a quantitative and detailed analysis of the tsunami properties in the near field. The new numerical results, which are in good agreement with previous experimental studies (e.g., Romano et al., 2017), have shown that, in general, the wave characteristics in the near field vary as a function of a 0 . In particular, for increasing values of a 0 , the wave crests and troughs tend to increase, the wave periods tend to decrease, and the mean celerities of the first wave troughs tend to increase. All the mentioned quantities exhibit a saturation mechanism for increasing values of a 0 (i.e., no more energy can be effectively transferred from the landslide to the water to generate larger waves), confirming the findings of Tinti and Bortolucci (2000). Furthermore, the numerical data have been represented, together with previous experimental data obtained for different geometries and configurations, in the form of a "nondimensional wavemaker curve," as proposed by Watts (1998).
The very good agreement among these different sources of data, together with the extended data set provided by the new numerical simulations, which consider a wider range of the governing parameters (i.e., 0.3 ≤ H a0 ≤ 7.5), made it possible to obtain a new fitting curve for predicting the near-field wave characteristics induced by rigid and impermeable submerged landslides as a function of the Hammack number H a0 . In the present work only tsunamis generated by rigid and impermeable submerged landslides have been considered. Although this represents an approximation of the real submerged landslide behavior, it is well demonstrated in the scientific literature (Grilli et al., 2009) that landslide deformation does not play a significant role in the features of submarine landslide tsunamis during the slide's early-time kinematics, which at short time scales are mainly governed by the initial acceleration. Nevertheless, as far as ongoing research is concerned, the effects of landslide deformation and porosity have to be introduced and modeled, especially if subaerial landslides are considered.
Development and validation of a simple-to-use nomogram to predict liver metastasis in patients with pancreatic neuroendocrine neoplasms: a large cohort study

Background Liver metastasis is an important prognostic factor for pancreatic neuroendocrine neoplasms (pNENs), but the relationship between the clinical features of patients with pNEN and liver metastasis remains undetermined. The aim of this study was to establish and validate an easy-to-use nomogram to predict liver metastasis in patients with pNEN. Methods We obtained the clinicopathologic data of 2960 patients with pancreatic neuroendocrine neoplasms from the Surveillance, Epidemiology and End Results (SEER) database between 2010 and 2016. Univariate and multivariate logistic regression analyses were performed to identify independent influencing factors with which to establish the nomogram. Calibration plots and the area under the receiver operating characteristic curve (AUC) were used to evaluate the performance of the nomogram. Decision curve analysis (DCA) was applied to compare the novel model with conventional predictive methods. Results A total of 2960 patients with pancreatic neuroendocrine neoplasms were included in the study. Among these, 1974 patients were assigned to the training group and 986 patients to the validation group. Multivariate logistic regression identified tumor size, grade, other site metastasis, T stage and N stage as independent risk factors. The calibration plot showed good discriminative ability in the training and validation groups, with C-indexes of 0.850 for the training cohort and 0.846 for the validation cohort. The AUC values were 0.850 (95% CI 0.830–0.869) and 0.839 (95% CI 0.812–0.866), respectively. The nomogram total points (NTP) had the potential to stratify patients into low risk, medium risk and high risk (P < 0.001). Finally, comparing the nomogram with traditional prediction methods, the DCA curve showed that the nomogram had a better net benefit. Conclusions Our nomogram has a good ability to predict liver metastasis of pancreatic neuroendocrine neoplasms, and it can guide clinicians in providing suitable prevention and treatment measures for patients at medium and high risk of liver metastasis. Supplementary Information The online version contains supplementary material available at 10.1186/s12876-021-01685-w.

Background

Pancreatic neuroendocrine neoplasms (pNENs) are relatively rare, with an estimated annual incidence of approximately 3.65/10,000 people [1,2]. The natural disease progression of pancreatic neuroendocrine tumors can lead to local lymph node, liver, lung, and bone metastases. Among these, liver metastases are the most common. It is reported that more than 60% of patients with pNEN have liver metastases [3]. Studies have found that liver metastasis is an important risk factor for prognosis [4]. The treatment strategy and prognosis of pNEN largely depend on whether there is liver metastasis. Therefore, early diagnosis and treatment of pNEN patients with liver metastases can significantly improve quality of life and prognosis. Due to the lack of typical clinical manifestations of nonfunctional pNEN in the early stage, 20% to 30% of pNEN patients already have liver metastases when diagnosed, which seriously affects their quality of life and long-term survival [5,6]. Therefore, it is critical that clinicians accurately identify the risk of liver metastases in patients with pNEN to devise optimal treatment strategies.
The routine examination for excluding liver metastasis is computed tomography (CT), but it has low sensitivity and specificity for microscopic liver metastasis [7]. Previous studies have shown that liver metastases from neuroendocrine tumors are correlated with a variety of clinicopathological factors, including histological type, primary site, tumor size, lymphatic invasion, and proliferative activity [8,9]. However, the above studies are limited to some fragmentary risk factors and small sample sizes. It is essential to explore the relationship between clinicopathological factors and liver metastasis based on a large sample database and to develop a prediction model of the risk of liver metastasis in pNEN patients. In this study, we constructed and validated a simple-to-use nomogram model. With this prediction model, clinicians can accurately identify patients with pNEN at medium and high risk of liver metastasis and provide them with personalized prevention and treatment strategies.

Study population and data sources

The data were extracted from the Surveillance, Epidemiology, and End Results (SEER) database using SEER*Stat software Version 8.3.6. Data from patients with pNEN diagnosed in 2010-2016 who had complete information, including age, sex, race, primary site, grade, marital status, T stage, N stage, tumor size, histology, and metastasis site, were included in the study. Pancreatic neuroendocrine neoplasms were selected on the basis of the histology codes for carcinoid tumor (8240) and atypical carcinoid tumor (8249). The exclusion criteria were as follows: (1) patients without definitive liver metastasis data; (2) patients with more than one primary cancer; and (3) patients without definitive grade and metastasis site information.

Construction and validation of the nomogram

We randomly assigned two-thirds of the patients to the training group, and the rest were assigned to the validation group. The chi-square test was used to compare the baseline characteristics of the two groups. In the training group, liver metastasis risk factors were determined through univariate logistic regression. Variables with P values less than 0.05 were entered into the multivariate logistic regression analysis. Based on the coefficients of the independent risk factors in the multivariate analysis, the prediction model was visualized in the form of a nomogram. To draw this nomogram, we needed to assign a score of 0-100 to each factor. The coefficients from the multivariate logistic regression results were transformed and are shown in the form of graphs. The nomogram's ruler for each indicator was scaled against the index with the most influence; the greater the influence of a risk factor, the higher its nomogram score [10]. The whole process was done in R 3.6.2 software. The details of building the nomogram and the R codes are provided in Additional file 1: Supplement Method 1. The concordance index (C-index), the receiver operating characteristic curve (ROC), and the area under the curve (AUC) were used to evaluate the predictive accuracy and discrimination of the nomogram. Decision curve analysis (DCA) [11] was used to evaluate the clinical utility of the nomogram and to compare the nomogram with conventional predictive risk factors, including grade, T stage, and tumor size. The details of DCA curve building and the R codes are provided in Additional file 1: Supplement Method 2.
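As a rough illustration of the scoring step described above, the following Python sketch converts logistic-regression coefficients into 0-100 nomogram points by scaling against the most influential predictor (the standard nomogram construction, e.g., as in R's rms package, which the study used); the coefficients and variable codings below are hypothetical, not the values of Table 3.

```python
def nomogram_points(coefs, ranges):
    """Convert logistic-regression coefficients into 0-100 nomogram points.

    The predictor whose |beta| * range is largest spans 0-100 points;
    every other predictor is scaled proportionally. `coefs` maps variable
    -> beta; `ranges` maps variable -> (min, max) of its coded values.
    """
    effect = {v: abs(coefs[v]) * (ranges[v][1] - ranges[v][0]) for v in coefs}
    top = max(effect.values())
    return {v: 100.0 * effect[v] / top for v in coefs}

# Hypothetical betas for the five independent risk factors identified above.
coefs = {"grade": 1.2, "T_stage": 1.1, "N_stage": 0.7,
         "tumor_size": 1.15, "other_site_met": 0.6}
ranges = {"grade": (0, 3), "T_stage": (0, 3), "N_stage": (0, 1),
          "tumor_size": (0, 3), "other_site_met": (0, 1)}
print(nomogram_points(coefs, ranges))
```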
Risk group stratification and statistical analysis

According to the characteristics of each patient's risk factors, a straight line was drawn to the "points" axis at the top of the model to obtain each factor's score. The total score was obtained by summing the scores for all the factors. To further discriminate the risk groups of liver metastasis, the patients were categorized into low-, medium- and high-risk groups based on the nomogram total points (NTP) of each pNEN patient. The two optimal cut-off values for NTP were calculated by X-tile software. The cut-off values were then validated in the validation group. The chi-square test was used to compare the risk groups. Statistical analysis was performed using SPSS software version 23 and R version 3.6.2 software. For all analyses, P values less than 0.05 were considered statistically significant.

Baseline characteristics of the patients

There were 2960 eligible patients with pNEN included in this study. A total of 1974 patients were allocated to the training group and 986 to the validation group. The two groups had no significant difference in baseline characteristics (all P > 0.05) (Table 1). In the entire study group, the median age was 58 years. The majority of the patients were white (n = 2268, 76.6%) and married (n = 1814, 61.3%). The pancreatic tail was the most common site of pNEN tumors (n = 1058, 35.7%). The main pathological grade was G1 (n = 2068, 69.9%), followed by G2 (n = 577, 19.5%). During the whole follow-up, most of the patients remained alive (81.9%) and only 535 (18.1%) patients died. There were 419 (21.2%) and 222 (22.5%) pNEN patients with liver metastases in the training group and validation group, respectively. Liver metastasis was found to be correlated with sex, primary site, grade, T stage, N stage, tumor size and other site metastasis in pNEN patients (Table 2).

Independent risk factors and nomogram construction

Univariate regression analysis was used to screen the risk factors for liver metastasis. The significant variables were included in the multivariate regression analysis. The results of the multivariate logistic regression analysis showed that grade, T stage, N stage, tumor size, and other site metastasis were independent risk factors for liver metastasis (Table 3). All the above variables were used to establish the nomogram model (Fig. 1). In this model, grade, T stage and tumor size had the greatest impact on liver metastasis, followed by N stage and other site metastasis. The probability of liver metastasis for each pNEN patient can be computed by adding up the corresponding scores of all the independent risk factors.

Nomogram validation and risk classification

The calibration plot showed good agreement in the training and validation groups (Fig. 2A, B). The C-index of liver metastasis prediction was 0.850 and 0.846 in the training and validation groups, respectively. When the ROC curves were plotted, the training group had an AUC of 0.850 (95% CI 0.830-0.869), which was verified in the validation group (AUC = 0.839, 95% CI 0.812-0.866) (Fig. 2C, D). Decision curve analysis (DCA), a novel method for evaluating the clinical practicality of models, was done next (Fig. 3). The results showed that the nomogram had satisfactory net benefits across most of the threshold probabilities in both groups. Compared with conventional predictive methods, our nomogram was more exact in predicting liver metastasis.
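A minimal sketch of how the AUC (which equals the C-index for a binary outcome) can be computed, together with a 1000-resample bootstrap interval of the kind mentioned in the Discussion, is given below; the simulated risk scores are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=1000, seed=0):
    """AUC with a percentile-bootstrap 95% CI (1000 resamples)."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    boots, n = [], len(y_true)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # need both classes to score
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)

# Toy example with simulated outcomes and risk scores.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
score = y * 0.8 + rng.normal(0.0, 0.5, 500)
print(auc_with_bootstrap_ci(y, score))
```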
The training group was divided into three subgroups based on the two optimal NTP cut-off values. According to the X-tile calculation results, the optimal cut-off values were 105.5 and 156.0, respectively (Fig. 4A). The patients were divided into low-risk (NTP < 105.5, n = 1278 (64.7%)), medium-risk (105.5 ≤ NTP < 156.0, n = 368 (18.6%)) and high-risk subgroups (NTP ≥ 156.0, n = 328 (16.6%)). The same cut-off values were used for grouping in the validation group. Notably, the high-risk pNEN patients were more likely to have liver metastases in both groups (P < 0.05) (Fig. 4B, C).

Discussion

Although the natural history of many pancreatic neuroendocrine tumors is characterized by slow progression and inertia, there are still patients who develop metastasis during the course of the disease, especially liver metastasis. For patients with resectable pNEN with liver metastases, active surgical resection of the primary and liver metastases should be the preferred treatment. Previous studies have reported that surgical resection of primary and metastatic lesions could improve quality of life and prolong survival, with a 5-year survival rate of 60-80% [12][13][14][15][16]. However, due to the limited sensitivity of current imaging modalities, early pNEN patients with liver metastasis have a high rate of missed diagnosis, which means patients may have lost their best chance of radical surgical resection by the time they are diagnosed. Liver biopsy has a high diagnosis rate, but it increases the risk of distant metastasis and leads to reduced survival time [17]. Therefore, a noninvasive and simple-to-use method is required for predicting the likelihood of liver metastasis in patients with pNEN. In our study, a novel nomogram was developed for predicting the probability of liver metastasis of pNEN based on a large database. The results demonstrated that the nomogram model is significantly discriminative and thus provides an individualized prediction of the probability of liver metastasis. Our study mainly focused on the clinical characteristics of pNEN patients with liver metastasis, and demonstrated that grade, T stage, N stage, tumor size, and other site metastasis were independent risk factors for liver metastasis. The G1-2 group had a higher percentage of pNEN patients with liver metastases (70.5%) than the other groups. This result is similar to that of Ruzzenente (81.9%) [18]. In addition, Spolverato [19] found that nonfunctional and moderately-to-poorly differentiated tumors were more likely to have liver metastases. We speculate that the reason is that G1-2 nonfunctional tumors are easily neglected in the early stage due to the lack of obvious clinical symptoms, so the tumor is already at an advanced stage when diagnosed. Previous studies have shown that the main cause of liver metastases is vascular invasion [20]. During hematogenous metastasis, the liver is the first filter for invading tumor cells. In this study, we found that the size and T stage of the primary tumor were closely related to the infiltration of neuroendocrine tumor cells into the liver. The size of the tumor is directly related to the T stage: the larger the primary tumor size, the more aggressive it is towards surrounding organs or blood vessels. (Figure 3 caption: The x-axis shows the threshold probabilities. The y-axis measures the net benefit, which is calculated by adding the true positives and subtracting the false positives. The horizontal solid black line assumes no liver metastasis will happen; the solid grey line assumes all patients will experience liver metastasis. In DCA, the nomogram yielded a superior clinical net benefit compared with the conventional forecasting methods across a range of threshold probabilities.)
This study also confirmed that the larger the tumor and the higher the T stage, the greater the probability of liver metastasis. Apart from the route of hematogenous metastasis, pancreatic neuroendocrine tumors may also metastasize to distant sites via lymphatic pathways. In our study, lymph node (LN) metastasis was identified as an independent risk factor for predicting liver metastasis. Positive lymph nodes are a common sign before distant metastasis, as has been demonstrated in other tumors [21,22]. In our study, 47.3% of patients with liver metastases had positive lymph nodes. Therefore, more attention should be paid to the presence of metastasis in the liver and other sites in patients with positive lymph nodes. Besides liver metastasis, there were also metastases at other distant sites (bone, lung, brain). In this study, more than 72.2% of pNEN patients with other site metastases also had liver metastases. This reveals that other metastases are probably present when liver metastases are found. This finding is consistent with other studies [23][24][25]. The advice given to the patient and the choice made among treatment options are based on the assessment of the individual's prognosis and risk [26]. Nomograms are graphical representations of statistical prediction models that predict the probability of an event occurring [27]. Thus, the variables contained in the nomogram should be easy to obtain and measure. In this study, we developed a nomogram to predict liver metastasis in patients with pNEN. Our nomogram model has been shown to have good discrimination, with high C-indexes and AUCs in both groups. Finally, DCA curves were generated to show that the nomogram could be used to obtain a better net benefit within the derived probabilities than traditional prediction methods [26]. There are some limitations to this study. The major limitation is the lack of important variables, such as surgical margin, Ki-67 and other molecular biomarkers. The Ki-67 index and surgical margin play an important role in the prognosis of pNEN [28]. Unfortunately, the absence of Ki-67 and surgical margin data in the SEER database made it impossible to assess their role in predicting liver metastasis of pNEN. Second, although our nomogram has been verified to have excellent prediction capabilities, further external validation based on a large multicenter data cohort is still required. Finally, since the SEER database is a retrospective database, selection bias cannot be completely avoided. Therefore, bootstrapping with 1000 resamples was performed in this study to minimize bias.

Conclusion

In conclusion, we successfully created and validated a simple-to-use nomogram for predicting the probability of liver metastasis in pNEN patients. This model has good predictive power, and it is easy for clinicians to use. By assessing the risk of liver metastasis, clinicians can provide individualized treatment and take the necessary preventive measures to reduce the risks borne by patients and improve their quality of life and prognosis.
The sentiments of Philippines’ underground artist and technology intervention

Listening to music is a significant and essential part of the daily life of many citizens; it is not just a hobby, but a reflection of one's personality. In the new age of technology, several ways of playing and producing music have become a trend, yet cultural and underground artists in the Philippines are still left behind. The objective of this study is to determine the current situation of the unsigned artist in the Philippines by means of interviewing unsigned artists, bands, and independent music practitioners. This study uses a descriptive method to answer the questions regarding the current situation and sentiments of the unsigned artist. The study also discusses the technological trends that could be considered to address the problems of the unsigned artist based on the current situation, the layout of the proposed portal and, lastly, an overview of the architectural model that shall be adopted in designing the proposed portal. Based on the results of the study, unsigned artists are in need of a platform for the proper distribution of music. In order to satisfy these needs, this paper also discusses an overview of the proposed portal.

Introduction

Music is one of the things that make life colorful; every living person in the world has listened to music, and some have tried singing despite a wrong tune. It is evident from the past that music is a hobby for everyone and has an effect on the emotions of the individual [1]. The philosophy of music is the study of fundamental questions related to the nature and value of music. Most people enjoy music on a daily basis and thus have very strong listening preferences. Music is said to be a collection of distinct sounds. Musicologists and philosophers might regard this as a necessary condition, but it is not a sufficient basis for a set of sounds to be called music. Over the years, music has been associated with the voice, but what is known about music comes mostly from the musical traditions that were written down or passed on. Music may well be one of those numerous things which you know when you experience it, but cannot define. Music evokes a wide variety of emotions, such as fear and excitement in movies. Music is indeed an essential part of societies around the whole world. Recent studies show that music has even been used in medical treatment, such as music therapy with relaxation imagery in the management of patients undergoing bone marrow transplantation [2], which could provide relaxation to the patients. In this light, the value of music cannot be underestimated: it is not just a hobby for an individual, but something that also heals the soul [3]. In the United States alone, as a form of art and cultural activity, listening to music is a significant and essential part of the daily life of many citizens. The demand for music has created a billion-dollar global music industry, which encompasses music production and distribution by major record companies, as well as other music-related activities such as concerts and shows. In the same country, the music industry was estimated to generate about 17.2 billion U.S. dollars in 2016. Forecasts show growth in the coming years: by 2021, it is expected that the music industry revenue in this single country will total over 22.6 billion U.S. dollars. Because of the rise of technology, digital formats have seen a rise in popularity in the U.S. in the last few years.
On the other hand, sales of other music formats (albums, digital albums, CDs and digital track sales) have been continuously declining since 2012 due to the availability of digital formats [4]. In the Philippines, over the past 90 years, we have seen the birth and end of different media that affected OPM (Original Pilipino Music): from the 45-rpm singles and 33-rpm long-playing vinyl albums of the 1950s, to the cassette tapes of the 1980s, and the rise of compact discs in the 1990s. The local recording industry and artists now have to contend with another seismic shift, in which only those who embrace the expansion of digital media will survive [5]. For this generation of music players and downloaders, interesting issues remain unresolved: despite recent developments in technology, several artists are still left behind. They are known as unsigned artists, those whose great compositions and music could contribute to the music industry but are not heard.

The objective of this study

The objective of this study is to determine the current situation of the unsigned artist in the Philippines by means of interviewing unsigned artists, bands, and independent music practitioners. This paper will also discuss a few opinions about the situation of the unsigned music industry in the Philippines. The discussion includes an examination of literature and contextual works relevant to the practice and process of the unsigned artist as a basis for the development of an Online Music Distribution Portal (OMDP).

Statement of the Problem

This paper will also answer the following questions: (1) What are the current situation and sentiments of the unsigned artist? (2) What technological trends could be considered to address the problems of the unsigned artist based on the current situation? (3) What is the layout of the proposed portal? (4) What is the conceptual framework that shall be adopted in designing the proposed portal?

The significance of the Study

The results of this study will be a basis for the development of the Online Music Distribution Portal in the Philippines. While this is just an initial study regarding the proposed system, it investigates the current situation in the country and can be an eye-opener for the music industry.

Music and Its Contribution

As noted above, the philosophy of music studies fundamental questions related to the nature and value of music, and a collection of sounds alone is not a sufficient basis to be called music [6]. The music industry is one of the cornerstones of the entertainment industry, along with other entertainment industries such as movies, television, and radio [7]. The music industry includes many different companies and individuals who all share the fact that they make money from music one way or another. The list of people involved in the music industry is long and, apart from the artists themselves, includes managers, agents, publishers, producers, distributors, retailers and those involved in presenting live music, to name a few [8]. Since the music industry has evolved over time with the progress of technology, new ways of participating in the music industry and generating revenue are continually being created. Over the years, music has also been associated with the voice, as in rap music.
We can conclude that music is indeed an essential part of societies around the whole world.

Music's advantages in developing the economy

The existence of music is felt in our daily activities. People tend to listen to music while driving, relaxing, studying and sleeping. Music moves people in numerous unimaginable ways, and it is one of the leading industries of the 21st century. The International Federation of the Phonographic Industry found that music industry sales amounted to 5.8 billion dollars, with performance rights revenue growing rapidly to 943 million dollars, up from 862 million dollars in 2011 [9]. In the present world, people are exploring different kinds of music that best suit their taste. With the rapid use of the internet, it is straightforward to spread music and open new movements in music. Asian music can grow bigger by gaining more popularity all over the world. In addition to record sales, Asian music can create other sources of revenue, such as concerts, tourism, and band merchandise sales, which would create more employment in the local market and in Asia as a whole. Aside from the financial effect of music on national development, music can also affect people's mindsets. Music is a channel for people to share their feelings with others. Asian musicians encourage their fellow citizens to keep trying harder in order to overcome their daily life challenges and achieve higher goals. Music also attracts attention to Asian countries and shows citizens of other countries that Asia is a progressing continent, rather than a source of sad news for the world.

Methodology

The research design of this study is descriptive and experimental. It gathers information based on observation, interviews, and documentation. The unsigned artists interviewed were those who had won battle-of-the-bands competitions in different provinces in the region. While gathering data was a challenge, the researcher utilized the internet to contact the artists in order to capture their exact and accurate sentiments.

Sources of Data

The primary sources of data are the unsigned artists who play in different places in the region and are known locally. Some of them were found through social media for data-gathering purposes, but most of them were interviewed personally by the researcher in order to answer the constructed questions.

Instrumentation and Data Collection

Personal observation was used by the researcher while attending several band plays. The researcher personally experienced the need to conduct this study due to the observed needs of the bands. Another method was the interview, which gathers information by asking a series of questions through oral communication. It was conducted by the researcher with several possible stakeholders to gather insights and identify the needs of the unsigned artist. A structured interview was constructed, with the following questions: (1) May I know basic information about you? (2) Where do you rehearse and how? (3) What music have you released so far? (4) What is your current label situation? (5) What is your current publishing situation? (6) What social media platform do you use in publishing your music? (7) How do you gain profit from music?
(8) What musical technology do you use in recording music? (9) How do you sell music? (10) What are the challenges and problems encountered? A system comparison was also made; this is a technique used to gather requirements during the elicitation phase of a future project. It is the process of reviewing the existing documentation of related business processes or systems in order to extract detailed pieces of information that are related and important to the current project, and that should therefore be considered as project requirements for the proposed portal in the future study. To understand the proposed portal, a conceptual framework based on the Input-Process-Output (IPO) model was adopted in order to explain the development of the system for future research.

Profile of the unsigned artists

A total of 18 unsigned artists participated, comprising 12 bands, 3 solo singers, and 3 one-man bands. Most of the respondents join battles of the bands conducted by an alcoholic beverage company in the country. The majority of the band members and solo singers are single, while all the one-man band respondents are married.

Practices in rehearsing music

Half of the respondents rehearse in rented band rooms, while 22 percent rehearse at home, 11 percent at a friend's home, and 6 percent in rented spaces. This shows that not all unsigned artists have their own studio or equipment.

Music release situation

As shown in Figure 1 (The situation in releasing music), most of the respondents have not released music. Some of them have released singles, but this does not mean that they are signed. Based on the interviews, the artists who have released music are those who created original compositions and planned for personal distribution. It is expected that there will be no signed artists in the unsigned community. Only one respondent is signed to an independent label, and being signed to an independent label does not mean being truly signed: independent distribution does not have the power to distribute music on the large scale aimed at by the artist. Based on the respondents, 94% of the artists have no publication.

Social media in promotion

Although the majority of the artists have no releases, this does not stop them from promoting music, such as cover songs. As shown in Figure 2 (The platforms used in marketing), most of the unsigned artists post information on social media: all of the artists use Facebook as an online marketing tool for gigs and other announcements; 78 percent of the artists have created personalized videos and posted them on video-sharing sites such as YouTube; 28% of the artists publish cover music and original compositions on SoundCloud; and only one has their own band website.

Source of income

Music is a business in the eyes of the artist: 94 percent of the respondents earn money through music plays and gigs, while only 6 percent earn through CD sales and merchandise sales. As shown in Figure 3 (The sources of income), 94% of the artists do not sell music, either because they do not know how or because there is no professional recording, while only one of the respondents sells albums through personal selling.

Technological trends needed by the unsigned artist

Based on the results of this study, technology may bridge the gap between unsigned artists and music distribution.
Unsigned artists are in need of a distribution portal that may help them distribute music. It is also visible in the results that most of the unsigned artists earn money from music plays/gigs and not from selling music. The proposed portal will help unsigned artists not only to earn revenue from their music but also to increase the popularity of the Philippine music industry and promote the culture of the Philippines.

The need for the Web among Philippine unsigned artists

The internet is a means of sharing ideas and connecting with other people. This gave birth to the illegal sharing of files, such as digital music files. Piracy of copyrighted materials has economically affected the music industry; however, with the introduction of web development, the music industry should be able to trace its path back. Streaming music through web and mobile development would allow lovers of the internet and music to enjoy listening to their favorite artists without harming the industry. Music piracy will probably continue to exist; an advocate of streaming services [10] opined that such websites have begun to reverse the harmful effects of piracy. Currently, the effect of streaming on the music industry is at a neutral level. The internet era, like all periods in the past, promised a redesign of the revenue structure of the recorded music industry. Physical media like CDs continued to decline with the introduction of the MP3, which became famous; although the MP3 seemed as appealing to the average consumer as legal music, the late 2000s introduced a new concept of music consumption.

Process of the proposed portal (Figure 4)

Based on the results of the study, unsigned artists are looking for a possible way to promote music, so a proposed Online Music Distribution Portal was conceptualized. Figure 4 shows the general process of the proposed portal. First, the unsigned artist will prepare the MP3 format of their recorded music. Next, the management will approve the music for distribution. After approval, the portal will distribute the music to different music stores. Lastly, a royalty payment will be given to the artist.

Overview of the Systems and Conceptual Framework

The conceptual framework is shown in Figure 5, where the proposed portal has three account levels: the unsigned artist account, the web administrator, and the manager or distribution label. The user or artist will create an account and provide information based on the requirements. The administrator approves the account of each unsigned artist and verifies the authenticity of the account. Next, the unsigned artist can upload music and monitor the revenue of the submitted music upon the approval of the manager. Lastly, the administrator will generate payouts for the unsigned artist. The proposed portal solves the problems of the unsigned artist regarding music sales, promotion, and marketing through the intervention of technology. To elaborate on the technological intervention of the proposed system: the projected system flow starts with account registration and verification. The unsigned artist can upload music, check the account balance and request a payout. The proposed system has two-way verification using CAPTCHA in order to prevent bots and attackers from gaining unauthorized access to the portal. It is also proposed that, in the payout and account balance section, the artist can add a PayPal account in order to be paid; funds from the PayPal account can also be withdrawn to any local bank account.
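To illustrate the projected flow (upload, administrator approval, distribution to partner stores, royalty reporting, and payout, with the 10% sustainability deduction proposed below), here is a minimal Python sketch; all class and method names are hypothetical and do not represent an actual implementation of the portal.

```python
from dataclasses import dataclass, field

PLATFORM_FEE = 0.10  # 10% deduction proposed for the portal's sustainability

@dataclass
class Submission:
    artist: str
    track: str
    status: str = "pending"      # pending -> approved -> distributed
    royalties: float = 0.0       # gross royalties reported by site partners

@dataclass
class Portal:
    """Sketch of the projected flow: upload -> admin approval ->
    distribution to partner stores -> royalty report -> artist payout."""
    submissions: list = field(default_factory=list)

    def upload(self, artist: str, track: str) -> Submission:
        sub = Submission(artist, track)
        self.submissions.append(sub)
        return sub

    def approve(self, sub: Submission) -> None:
        sub.status = "approved"          # administrator verifies the upload

    def distribute(self, sub: Submission) -> None:
        if sub.status != "approved":
            raise ValueError("only approved music goes to partner stores")
        sub.status = "distributed"

    def report_royalty(self, sub: Submission, amount: float) -> None:
        sub.royalties += amount          # partners report sales back

    def payout(self, sub: Submission) -> float:
        net = sub.royalties * (1.0 - PLATFORM_FEE)
        sub.royalties = 0.0
        return net                       # e.g., sent to the artist's PayPal

portal = Portal()
s = portal.upload("Sample Band", "Original Composition")
portal.approve(s)
portal.distribute(s)
portal.report_royalty(s, 100.0)
print(portal.payout(s))  # 90.0 after the 10% sustainability deduction
```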
A minimal amount of 10% will be deducted from the royalties for the sustainability of the portal. Lastly, in the upload section, the unsigned artist can upload music, which is then verified by the site administrators. Site administrators will review each submission before sending it to the site partners, and the site partners will report royalties back to the site administrators for payout to the artist.

Conclusions and Recommendation

Based on the results of this study, which include a brief overview of the system, it is concluded that there is a need to develop a portal for unsigned artists. The results make it clear that most unsigned artists are not aware of how to distribute music online; a portal is needed in order for Filipino artists to distribute their music. As the primary objective of this study was to determine the current situation of the unsigned artist in the Philippines, the study found that unsigned artists in the country are struggling to gain popularity. As shown in the results, artists earn more from performances than from the revenue of the music itself: the traditional way of earning income is still dominant, and technology is still not put into practice. It is recommended that the proposed portal be developed, since it will help unsigned artists disseminate their music. It will also benefit listeners, allowing them to discover the music of unsigned artists. For other bands, it is a ground for collaboration to improve the music industry. It would also help the country promote the music of unsigned artists worldwide. As for the business side of the proposed portal, it will help the proprietor and owner of the business gain profit. It is also a promotion ground for Original Pilipino Music to the world.
Current Trends of HIV Infection in the Russian Federation

Russia remains one of the areas most affected by HIV in Eastern Europe and Central Asia. The aim of this study was to analyze HIV infection indicators and study trends in Russia using data from the Federal Statistic Form No. 61 "Information about HIV infection". HIV incidence, prevalence, HIV testing and mortality rates (from 2011 to 2022), and treatment success rates (from 2016 to 2022) were analyzed. These indicators were compared across different federal districts (FDs) of Russia. The findings revealed a significant downward trend in HIV incidence, while a significant upward trend was observed for HIV prevalence. The mortality rate has stabilized since 2018. The coverage of HIV testing and antiretroviral therapy increased over time. The proportion of people living with HIV-1 (PLWH) with a suppressed viral load in Russia as a whole varied between 72% and 77% during the years under observation. The Siberian and Ural federal districts recorded the highest HIV incidence, while the North Caucasian FD reported the lowest. An increase in HIV testing coverage was observed across all FDs. This comprehensive evaluation of HIV infection indicators within the regional context contributes to the timely implementation of measures aimed at preventing the spread of HIV.

Introduction

Currently, HIV infection continues to pose a significant global public health challenge. According to World Health Organization (WHO) estimates, the global number of people living with HIV (PLWH) was approximately 39.0 million by the end of 2022. The transmission and spread of HIV-1 persist worldwide [1]. Since the onset of the global HIV epidemic, its impact has been felt across all sectors of society. Primarily affecting the working-age population, HIV infection diminishes labor resources, thereby influencing various economic processes in countries worldwide [2]. However, from the time of the first recorded case of HIV infection to the present, the disease has transitioned from the category of a fatal condition to a chronic and manageable one, primarily due to the development and implementation of antiretroviral drugs in medical practice. The primary aim of antiretroviral treatment (ART) is to prolong the healthy life of the patient. Furthermore, ART enables the suppression of virus transmission by inhibiting its replication in the body of each HIV-infected individual [1]. Despite its enormous benefits, ART remains a significant source of costs [3]. In 2014, the Joint United Nations (UN) Program on HIV/AIDS (UNAIDS) adopted the "90-90-90" ("95-95-95" since 2018) strategy, with the following aims: 90% of PLWH should be aware of their status; 90% of them should be accessing ART; and 90% of all patients receiving ART should have suppressed viral loads. The goal of this strategy is to significantly reduce the occurrence and spread of new cases of HIV infection in the world and, ultimately, eradicate AIDS by 2030 [4]. The only region in the world where the number of new HIV infections continues to rise is Eastern Europe and Central Asia (EECA) [5]. According to the European Center for Disease Prevention and Control (ECDC) and WHO reports, the Russian Federation had the highest rate of newly diagnosed HIV infections in the European region, with a rate of 40.2 per 100,000 population at the end of 2021 [6].
HIV infection in Russia emerged as a significant issue a decade later than in the United States, Western Europe, and Africa, due to the country's relative isolation and limited international migration links. The first substantial increase in incidence, aside from isolated cases and a nosocomial outbreak in the late 1980s, occurred in 1996 and was linked to the introduction of HIV and its rapid spread among injecting drug users (IDUs) [7]. Presently, heterosexual transmission has risen, with more than 50% of new HIV infections stemming from unsafe heterosexual contacts [8]. The aging of the HIV-infected population has also been noted, due to the increased life expectancy resulting from ART [9]. Previous studies on HIV infection at the regional level in Russia have primarily focused on molecular genetic analysis of HIV and its drug resistance [7,10-15]. In Russia, HIV infection is classified as a socially significant disease, prompting intensive research on the epidemiology of HIV infection [16]. In December 2020, the Government of the Russian Federation implemented a new State Strategy to counter the spread of HIV infection, aligning with the "90-90-90" strategy, aimed at curbing the spread of HIV infection in Russia by continually reducing the number of new cases of HIV infection and the mortality from HIV/AIDS-associated diseases. The ultimate goal is to eliminate the threat of AIDS to public health by 2030 [17]. To facilitate statistical monitoring in healthcare, the Ministry of Health of the Russian Federation maintains the Federal Register of PLWH, using specifically designed forms that allow for the assessment of various epidemiological indicators of HIV/AIDS [18]. Thus, Federal Statistic Form No. 61 ("Information about HIV infection") contains information about patients with HIV infection. This study was based on the data from this form. The aim of this study was to analyze the key indicators of HIV infection in the Russian Federation from 2011 to 2022, including by federal districts, and to examine its trends. This marks the first comprehensive examination of the epidemiological situation of HIV infection in the Russian Federation.

Datasets

Two data sources were used for this study. The Federal Statistical Form No. 61 "Information on HIV infection" provided the necessary information to estimate HIV incidence, prevalence, HIV testing and mortality (from 2011 to 2022), and therapy success rates (from 2016 to 2022). This database encompasses annual indicators from the Russian Federation and its federal districts.
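As a small worked illustration of the indicators discussed above, the following Python sketch computes a rate per 100,000 population and the overall viral-suppression share implied by a multiplicative 90-90-90-style cascade; the case and population counts below are hypothetical, not figures from Form No. 61.

```python
def rate_per_100k(cases: int, population: int) -> float:
    """Annual indicator as used in surveillance reports (e.g., the 40.2 new
    diagnoses per 100,000 cited above for 2021)."""
    return 100_000 * cases / population

def cascade_coverage(p_diagnosed: float, p_on_art: float,
                     p_suppressed: float) -> float:
    """Overall share of PLWH virally suppressed under a 90-90-90-style
    cascade: the three conditional targets multiply through."""
    return p_diagnosed * p_on_art * p_suppressed

# Worked example: meeting 90-90-90 exactly implies ~73% of all PLWH
# suppressed; meeting 95-95-95 implies ~86%.
print(round(cascade_coverage(0.90, 0.90, 0.90), 3))   # 0.729
print(round(cascade_coverage(0.95, 0.95, 0.95), 3))   # 0.857
print(round(rate_per_100k(58_000, 146_000_000), 1))   # hypothetical counts
```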
Statistical Analyses

The analyzed intensive indicators were calculated using the following formulas:

Incidence = (number of new HIV infections / population) × 100,000

Prevalence = (number of PLWH registered at the end of the year / population) × 100,000

Mortality = (number of HIV-infected patients removed from the register due to death / population) × 100,000

Indicators of therapy success were calculated using the following formulas:

Proportion of PLWH receiving ART = (number of PLWH receiving ART / number of PLWH registered at the end of the year) × 100%

Proportion of patients with suppressed VL = (number of patients with suppressed VL / number of PLWH receiving ART) × 100%

To evaluate 95% confidence intervals, the binomial distribution was used, where I is an intensive indicator expressed in persons per 100,000 population. In the second part of the study, the above indicators were compared across the different federal districts of the Russian Federation.
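These calculations are simple enough to verify directly. The following is a minimal sketch (not the authors' code) of the intensive indicators and a 95% confidence interval; the paper does not state which binomial interval was used, so a normal-approximation (Wald) interval is assumed here, and the case counts are made up for illustration.

```python
# Minimal sketch of the per-100,000 intensive indicators defined above.
from scipy import stats

def intensive_indicator(cases: int, population: int) -> float:
    """Cases per 100,000 population (incidence, prevalence, or mortality)."""
    return cases / population * 100_000

def wald_ci_95(cases: int, population: int) -> tuple[float, float]:
    """Approximate 95% CI for the indicator, per 100,000 population.
    Assumption: a Wald interval on the binomial proportion."""
    p = cases / population
    se = (p * (1 - p) / population) ** 0.5
    z = stats.norm.ppf(0.975)  # ~1.96
    return ((p - z * se) * 100_000, (p + z * se) * 100_000)

# Illustrative numbers only, not the registry data.
print(intensive_indicator(55_600, 145_600_000))  # ~38.2 per 100,000
print(wald_ci_95(55_600, 145_600_000))
```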
The trend of the long-term dynamics of incidence, prevalence, and mortality was determined using the method of least squares [20]. The alignment of the time series was carried out according to the function y1 = a + b·x, where y1 is the rectilinear trend indicator; a is a constant characterizing the long-term incidence (prevalence or mortality) rate; b is a variable value for each analyzed year, which forms the angle of the trend; and x denotes the analyzed time intervals. Statistical significance was evaluated using the F-criterion in SPSS Statistics ver. 26 (IBM, Armonk, NY, USA). The severity of the trend T was assessed with K = 1 for an odd number of years of observation and K = 2 for an even number of years of observation. If |T| >= 5%, the trend was evaluated as pronounced; if 1% <= |T| < 5%, the trend was moderate; and if |T| < 1%, the intensive indicator was considered stable.

To assess differences in the age of HIV-infected individuals in 2011 and 2022, the Kruskal-Wallis criterion from the scipy.stats library in Python was applied.
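As a rough illustration of the trend analysis described above, the sketch below fits the rectilinear trend y1 = a + b·x by least squares and extrapolates it, as done for Figure 12. The incidence series is synthetic, the AAGR is assumed to be the mean year-over-year percentage change, and T is approximated by the AAGR, since the extracted text does not preserve the authors' exact formulas for either.

```python
# Rough sketch of the least-squares trend fit, extrapolation, and severity
# classification; values and the AAGR/T definitions are assumptions.
import numpy as np
from scipy import stats

years = np.arange(2011, 2023)                      # x: analyzed time intervals
incidence = np.array([48.5, 50.1, 51.3, 53.0, 54.2, 51.8,
                      49.0, 46.5, 44.1, 40.9, 39.2, 38.2])  # synthetic values

# Rectilinear trend y1 = a + b*x fitted by least squares.
fit = stats.linregress(years, incidence)
a, b = fit.intercept, fit.slope
print(f"y1 = {a:.1f} + {b:.2f}*x, p = {fit.pvalue:.4f}")

# Extrapolated ("theoretical") indicator for 2025, as in Figure 12.
print(f"predicted 2025 incidence: {a + b * 2025:.1f} per 100,000")

# Average annual growth rate (assumed: mean year-over-year percentage change).
aagr = np.mean(np.diff(incidence) / incidence[:-1]) * 100

def trend_severity(t: float) -> str:
    """Thresholds taken verbatim from the text above."""
    if abs(t) >= 5:
        return "pronounced"
    if abs(t) >= 1:
        return "moderate"
    return "stable"

print(f"AAGR = {aagr:.2f}% ({trend_severity(aagr)})")

# The age comparison between 2011 and 2022 would use, e.g.:
# stats.kruskal(ages_2011, ages_2022)
```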
Current HIV Infection Trends in the Russian Federation

The long-term dynamics of HIV infection incidence and prevalence and of PLWH mortality in Russia from 2011 to 2022 are shown below (Figure 1A).

The incidence graph highlights two distinct periods: a slight increase in incidence rates from 2011 to 2015, followed by a subsequent decrease to 38.17 cases per 100,000 persons during 2015-2022. A significant (p = 0.029) and pronounced downward trend was identified, with an average annual growth rate (AAGR) of −5.61%.

When analyzing the prevalence of HIV-1 in Russia, a significant (p = 0.001) and pronounced upward trend was discovered, with an average annual growth rate of 10.96%.

On the mortality graph, two distinct periods can be observed: a significant (p < 0.001) increase in the death rate of PLWH from 2011 to 2018, followed by stabilization from 2018 onwards, with rates recorded at 19.37 cases per 100,000 persons in 2019, 18.26 in 2020, 16.95 in 2021, and 17.98 in 2022.

The coverage of HIV testing in Russia increased over time, from 21.8% in 2016 to 32.2% in 2022 (p < 0.001). Although a visual assessment of testing coverage revealed a decline to 24.6% in 2020, this did not affect the overall upward trend (Figure 1B).

ART coverage in Russia experienced a steady increase, from 43% in 2016 to 93% in 2022. The proportion of PLWH with a suppressed viral load fluctuated between 72% and 77% across the years of observation (Figure 1C).

HIV Infection in Different Federal Districts of Russia

The Russian Federation, encompassing eight federal districts (Figure S1), holds one of the largest territories globally. The Siberian and Ural Federal Districts recorded the highest incidence of HIV infection, while the North Caucasian Federal District reported the lowest (Figure 2).

A significant trend toward a decrease in the incidence of HIV was revealed in three federal districts: the Ural (p = 0.018; AAGR = −4.92%, moderate), the Northwestern (p < 0.001; AAGR = −7.66%, pronounced), and the Central (p = 0.005; AAGR = −11.01%, pronounced). In two federal districts, the North Caucasian and the Far Eastern, a significant and pronounced upward trend was revealed (p = 0.023, AAGR = 7.64% and p = 0.005, AAGR = 8.14%, respectively).

A visual assessment of HIV-1 incidence in the Siberian and the Volga Federal Districts demonstrated a decline commencing in 2015 (Figure 2). The Southern Federal District was characterized by an uneven distribution of incidence with periods of fluctuation (Figure 2).

An increase in HIV testing coverage was observed across all federal districts of the Russian Federation, with a single decline in 2020. The Central Federal District achieved the maximum HIV testing coverage, while the North Caucasian Federal District reported the minimum (Figure 3).

HIV Infection in the Central Federal District

The long-term trends in the incidence of HIV-1 in the Central Federal District mirror those of Russia as a whole. The incidence graph illustrates two distinct periods: an initial phase of increase (2011-2015), with the incidence rising from 33.53 to 49.82 cases per 100,000 persons, followed by a decline continuing until 2022 (23.01 cases per 100,000 persons). A significant (p = 0.005) and pronounced downward trend was identified, with an average annual growth rate of −11.01% (Figure 4A).
The prevalence of HIV in the Central Federal District showed a significant (p < 0.001) and pronounced upward trend (AAGR = 9.09%). During the study period, the mortality of PLWH also increased significantly (p < 0.001), from 5.11 to 7.82 cases per 100,000 persons, with an AAGR of 7.35% (Figure 4A).

ART coverage increased over time, from 50% in 2016 to 96% in 2022 (p < 0.001). The proportion of PLWH with a suppressed viral load fluctuated between 76% and 81% across the years of observation (Figure 4B).
HIV Infection in the Northwestern Federal District

A significant, pronounced (p = 0.001; AAGR = 7.77%) trend toward a decrease in the incidence was revealed. At the same time, a visual assessment of the graph showed an uneven distribution of incidence, with minor increases in 2015 (52.48 cases per 100,000 persons) and 2018 (42.78 cases per 100,000 persons) and a sharp decline in 2016 (41.19 cases per 100,000 persons) (Figure 5A).

The prevalence (p < 0.001) and mortality (p = 0.02) rates showed significant upward trends, with average annual growth rates of 5.88% (pronounced) and 3.96% (moderate), respectively (Figure 5A). ART coverage increased over time, from 46% in 2016 to 90% in 2022. The proportion of PLWH with a suppressed viral load fluctuated between 71% and 80% across the years of observation (Figure 5B).

HIV Infection in the Ural Federal District

A significant, moderate (p = 0.018; AAGR = −4.92%) trend toward a decrease in the incidence was observed. In 2015, the incidence of HIV in this federal district reached a peak of 135.32 cases per 100,000 persons (Figure 6A).

ART coverage increased over time, from 43% in 2016 to 90% in 2022. The proportion of PLWH with a suppressed viral load varied from 72% to 82% across the years of observation, with a decrease in 2022 (Figure 6B).
HIV Infection in the Volga Federal District

Two periods can be distinguished on the incidence graph: from 2011 to 2015, a significant, pronounced (p = 0.016; AAGR = 7.28%) trend toward an increase, and from 2015 to 2022, a significant, pronounced (p < 0.001; AAGR = −10.60%) trend toward a decrease in the incidence (Figure 7A).

ART coverage increased over time, from 44% in 2016 to 96% in 2022. The proportion of PLWH with a suppressed viral load varied from 64% to 76% across the years of observation, with the lowest rates noted in 2017 and 2018 (Figure 7B).

HIV Infection in the Siberian Federal District

ART coverage increased over time, from 30% in 2016 to 94% in 2022, with a one-time decrease in this indicator in 2021 (74%). The proportion of PLWH with a suppressed viral load varied from 69% to 77% in different years (Figure 8B).

HIV Infection in the Southern Federal District

The graph of HIV incidence demonstrated a significant, pronounced (p < 0.001; AAGR = 21.25%) trend toward an increase, from 20.91 cases per 100,000 persons in 2011 to 38.97 cases per 100,000 persons in 2016, followed by a constant decrease in incidence (Figure 9A). The prevalence rates demonstrated a significant, pronounced (p < 0.001; AAGR = 18.81%) upward trend (Figure 9A).

The graph of HIV mortality demonstrated a significant, pronounced (p = 0.002; AAGR = 18.07%) trend toward an increase, from 6.49 cases per 100,000 persons in 2011 to 11.92 cases per 100,000 persons in 2018. Since 2018, mortality among PLWH has decreased (Figure 9A).

ART coverage increased over time, from 50% in 2016 to 92% in 2022. The proportion of PLWH with a suppressed viral load varied from 71% to 83% across the years of observation, with a decrease in rates from 2020; the lowest rate was noted in 2017 (71%) (Figure 9B).
HIV Infection in the Far Eastern Federal District

The prevalence rates demonstrated a significant, pronounced (p < 0.001; AAGR = 17.18%) upward trend, from 128.23 to 347.83 cases per 100,000 persons. The mortality rates also indicated a significant, pronounced (p < 0.001; AAGR = 19.64%) upward trend, with a one-time rise in 2019 (18.99 cases per 100,000 persons) (Figure 10A).

ART coverage increased from 44% in 2016 to 92% in 2022, with a one-time decline in 2017 (39%). The proportion of PLWH with a suppressed viral load varied from 58% to 77% across the years of observation; the lowest rate was noted in 2017 (58%) (Figure 10B).

HIV Infection in the North Caucasian Federal District

A significant, pronounced (p = 0.029; AAGR = 7.64%) upward trend in HIV incidence was revealed. In 2015, the incidence of HIV in this federal district reached a peak of 15.7 cases per 100,000 persons (Figure 11A). The prevalence rates demonstrated a significant, pronounced (p < 0.001; AAGR = 16.57%) upward trend, from 40.96 to 114.30 cases per 100,000 persons. The mortality rates also indicated a significant, pronounced (p < 0.001; AAGR = 16.57%) upward trend, from 2.13 to 3.91 cases per 100,000 persons. A visual assessment showed a stabilization of mortality rates since 2019 (Figure 11A).

ART coverage increased from 52% in 2016 to 99% in 2022. The proportion of PLWH with a suppressed viral load varied from 67% to 75% across the years of observation (Figure 11B).

Trend of Long-Term Dynamics of HIV Infection in the Russian Federation

Trend lines of the long-term dynamics of HIV infection incidence and prevalence and of PLWH mortality in Russia from 2011 to 2025 are shown below (Figure 12). An assessment of the theoretical HIV incidence revealed a significant, pronounced trend toward its decrease (p < 0.001; AAGR = −6.43%). The difference between the theoretical indicators of the first and last years was 21.52 per 100,000 persons (or 36.01%).
An assessment of the theoretical HIV prevalence revealed a significant, pronounced (p < 0.001; AAGR = 11.05%) upward trend. According to the trend line, HIV prevalence in Russia will increase by 604.59 per 100,000 persons by 2025.

The PLWH mortality trend line showed a significant (p < 0.001) upward trend, with an average annual growth rate of 9.15%.

Discussion

The epidemiology of HIV infection in Russia possesses distinct characteristics. The Russian Federation, one of the world's largest countries, comprises eight federal districts with significant variations in socioeconomic and demographic indicators [21]. These differences affect the spread and treatment of HIV infection.

Furthermore, there are unique aspects to the ways HIV infection spreads in Russia. The HIV epidemic within the Russian Federation, starting in the mid-1990s, primarily affected IDUs [8]. While new cases of HIV infection have increasingly been associated with heterosexual transmission since the second half of the 2010s, the parenteral route remains a significant mode of transmission [8]. This presents challenges in HIV treatment, particularly in maintaining adherence among PLWH from the cohort of IDUs [22].
Currently, various researchers are publishing discrete data on HIV prevalence in Russia based on different sources. This study represents the first comprehensive, large-scale analysis of current trends in HIV infection in Russia, encompassing both epidemiological indicators and measures of therapy success. These indicators were analyzed in all federal districts of the Russian Federation.

The findings of this study demonstrate a significant and consistent decrease in the number of new HIV infection cases in Russia from 2015 to the present, aligning with previous research [23,24]. Additionally, a notable increase in the coverage of HIV testing was observed throughout the observation period (2011-2022). There was a single dip in testing coverage in 2020, likely associated with the onset of the COVID-19 pandemic.

The Siberian and Ural Federal Districts have reported the highest incidence of HIV infection, and a historical background explains this phenomenon. During the late 1990s, a significant number of IDUs emerged in these regions. The Perm Territory, which was part of the Ural Federal District until 2000 and has since been part of the Volga Federal District, and the Irkutsk Region in the Siberian Federal District were among the first regions of the Russian Federation to register cases of HIV infection among IDUs [7,25,26]. This initial development set the stage for the current challenging situation regarding HIV-1 incidence in Siberia and the Urals. Previous research has also highlighted the highest mortality and HIV prevalence rates in the Siberian and Ural Federal Districts [27]. Notably, a decrease in HIV incidence has been observed in these federal districts since 2015. Simultaneously, all federal districts in Russia have continued to experience an upward trend in HIV prevalence, which can be associated with the increased life expectancy of PLWH. The median age of PLWH rose from 31 in 2011 to 41 in 2022 (p < 0.001) (Figure S2).

The findings of this study also indicate a decline in the mortality rate since 2018. Additionally, there is an observed "aging" of the HIV-infected population, with an increase in the age of individuals who have died of HIV infection (p < 0.001) (Figures S2 and S3).

A decrease in the incidence of HIV was observed in all federal districts except the Far Eastern and North Caucasian Federal Districts. The Far Eastern Federal District, owing to its remote location from the primary territories of Russia, exhibits unique socio-demographic characteristics. A study on migration and demographic patterns in the Russian Far East highlighted a consistent population decline, primarily attributed to migration outflow. The Far East's share of the total Russian population decreased from 5.4% in 1991 to 4.3% in 2020, with 3.8% of its total population leaving the region in 2020 [28]. This population decrease may be correlated with the observed increase in HIV incidence in this federal district.

Conversely, the North Caucasian region is characterized by the prevalence of traditional values compared with other federal districts, leading to stigma surrounding HIV and inhibiting openness among HIV-infected individuals about their status and timely medical registration. Previous studies have noted that the North Caucasian Federal District has the lowest HIV testing rates (19.4% of district residents in 2017) [29]. These findings suggest a natural increase in HIV incidence in this region.
The Southern Federal District exhibited fluctuating incidence rates, likely influenced by its high migration appeal and tourist influx.

Since 2017, the Ministry of Health of the Russian Federation has recommended administering antiretroviral therapy to all PLWH, irrespective of CD4 cell count and viral load level [30]. Previous reports have indicated a significant increase in ART coverage in Russia, from 4% in 2006 to 58% in 2021 [31]. The development and introduction of Russian-produced antiretroviral drugs have facilitated broader access to ART for PLWH in the country, contributing to increased life expectancy. To date, 40 international non-proprietary names of drugs for the treatment of HIV infection have been registered in Russia, and domestically produced antiretroviral drugs have been actively introduced into clinical practice to make ART available to all PLWH [32]. Since 2020, elsulfavirine, a non-nucleoside reverse transcriptase inhibitor developed in Russia, has been included in preferred first-line ART regimens in Russia [30,33].

As a result, the share of PLWH receiving ART increased substantially between 2016 and 2022 in all federal districts. Notably, in 2022, the share of PLWH receiving ART exceeded 95% in the Central, Volga, and North Caucasian Federal Districts and ranged from 90% to 94% in the Northwestern, Ural, Siberian, Far Eastern, and Southern Federal Districts. Overall, the share of PLWH in Russia receiving ART increased from 43% in 2016 to 93% in 2022. This widespread availability of antiretroviral therapy has contributed to extending the lives of PLWH, resulting in an increase in the prevalence of HIV infection and the stabilization of mortality rates among HIV-infected patients by the end of the observation period.

Throughout the observation period, the proportion of PLWH with suppressed viral loads in Russia ranged from 72% to 77%. In 2022, the highest percentages of PLWH with suppressed viral loads were recorded in the Central, Northwestern, and Southern Federal Districts (81%, 78%, and 76%, respectively). In the Ural, Volga, and North Caucasian Federal Districts, this proportion ranged from 74% to 75%, while the Siberian and Far Eastern Federal Districts reported the lowest proportions in 2022, at 69%. Given the significant proportion of PLWH with a history of injecting drug use in Russia, ensuring adherence to treatment remains a pertinent issue. To address this, long-acting injectable therapies with bimonthly dosing have been introduced in Europe, offering promising solutions for improving adherence [34]. In Russia, long-acting injectable drugs have recently been registered: Recambis (Reg. No. LP-No.(001678)-(RG-RU) from 01.17.2023) and Vocabria (Reg. No. LP-No.(001504)-(RG-RU) from 05.12.2022). Furthermore, long-acting injectable drugs are under development within Russia, indicating progress in this domain [32].
The identified trends in the long-term dynamics of HIV-1 in the Russian Federation until 2025 indicate a decrease in the incidence of HIV alongside increases in prevalence and mortality. The obtained predictive trend lines correspond to these indicators of HIV infection at its present stage in the Russian Federation and can be explained as follows: the use of highly effective ART increases the life expectancy of PLWH, contributing to a rise in the prevalence of HIV, while the aging of the HIV-infected population over time results in a natural process of aging and death. It is also worth noting that Federal Statistical Form No. 61 does not record the causes of death of PLWH, implying that death might result from factors other than HIV infection and AIDS, such as chronic diseases and accidents.

Such a comprehensive study of HIV infection indicators in the Russian Federation within a regional context not only helps in assessing the overall situation in the country but also aids in identifying regions with a less favorable epidemiological situation. This identification enables the timely implementation of epidemiological measures and organizational arrangements to prevent the spread of HIV.

Figure 1. Intensive indicators of HIV infection in the Russian Federation. (A) Indicators of incidence, prevalence, and mortality. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Indicator of HIV testing. (C) Indicators of ART. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 2. Incidence of HIV infection in the federal districts of the Russian Federation.

Figure 3. Coverage of HIV testing in the federal districts of the Russian Federation.

Figure 4. HIV infection in the Central Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.
Figure 5. HIV infection in the Northwestern Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 6. HIV infection in the Ural Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 7. HIV infection in the Volga Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 8. HIV infection in the Siberian Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 9. HIV infection in the Southern Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 10. HIV infection in the Far Eastern Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 11. HIV infection in the North Caucasian Federal District. (A) Intensive indicators of the epidemic process of HIV infection. The left y-axis is relevant for incidence and mortality, and the right y-axis is relevant for prevalence. (B) Antiretroviral therapy. Stacked column chart: the proportion of PLWH receiving ART and the proportion of patients with suppressed VL. The red lines indicate 95% limits according to the "95-95-95" strategy.

Figure 12. Trend lines of long-term dynamics of HIV infection in Russia. The scale on the left (black) reflects the values for the incidence and mortality trend lines. The scale on the right (red) reflects the values for the prevalence of HIV-1.
2023-10-28T15:08:11.018Z
2023-10-26T00:00:00.000
{ "year": 2023, "sha1": "13364d34dabd88e5df10d6b875d70270df39df38", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/15/11/2156/pdf?version=1698295036", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a8d458b65024dd155c504da2c2e1285fd75e47b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
253097136
pes2o/s2orc
v3-fos-license
Association between serum ferritin levels and colorectal cancer risk in Korea Background/Aims The concentration of serum ferritin, a storage form of iron, may be associated with carcinogenesis in various cancers. There are only limited studies on the relationship between serum ferritin levels and colorectal cancer (CRC) risk, especially in the Asian population. This study aimed to analyze the association between CRC incidence and serum ferritin levels. Methods This was a national cohort study that used health checkup and insurance claims data of the Korean population. CRC incidence according to the serum ferritin level was analyzed during 2008–2018 in 17,116 participants. Results The hazard ratio (HR) of CRC incidence decreased as serum ferritin levels increased (Q1: HR, 1.000 [95% confidence interval (CI), reference]; Q2: HR, 0.811 [95% CI, 0.558 to 1.178]; Q3: HR, 0.654 [95% CI, 0.442 to 0.968]; Q4: HR, 0.443 [95% CI, 0.285 to 0.687]; p = 0.0026). In subgroup analysis, 40 to 64 years of age, sex, body mass index of < 25 kg/m2, presence of metabolic syndrome, absence of diabetes mellitus, and absence of anemia had HRs of < 0.5 (95% CI) in the highest quartiles compared with those in the lowest quartiles. Conclusions This study shows an inverse association between serum ferritin and CRC risk. Serum ferritin measurement can aid in identifying young adults requiring active CRC screening.

INTRODUCTION
Colorectal cancer (CRC) is the fourth most common cancer worldwide. It was newly diagnosed in 147,950 patients in the United States in 2020 and in 27,909 patients (54.4 patients per 100,000 people) in Korea based on national cancer statistics in 2018 [1,2]. Risk factors for the incidence of CRC include lifestyle-related factors such as a high-fat diet, obesity, lack of physical activity, smoking, and alcohol consumption [3]. Red meat consumption is also known to be a strong risk factor [3], which can be considered in relation to dietary iron levels. Iron in red meat is present in the form of heme iron, and CRC risk increases with heme iron intake [4]. Iron, which has the ability to transfer unpaired electrons, is a key player in oxidation-reduction (redox) reactions, exhibiting oxidation states ranging from −2 to +6. Because this flexibility in accepting different oxidation states allows iron to interact with various ligands, iron is essential for sustaining life [5]. However, this same ability generates large amounts of hydroxyl radicals, which can cause DNA damage and drive carcinogenesis [5]. This suggests that iron overload in the human body is related to the development of carcinogenesis. In a study of 14,407 people observed for 10 years, the 858 patients diagnosed with cancer had higher transferrin saturation and lower total iron-binding capacity than people without cancer; similar results have been reported in other studies [6,7]. In cases of excessive systemic iron levels, iron is sequestered and stored to prevent toxicity. Ferritin is a major iron storage protein that plays a critical role in the maintenance of systemic iron homeostasis [8]. Therefore, serum ferritin levels generally decrease during iron deficiency and increase during iron overload [8]. Based on these concepts, a positive correlation between cancer risk and serum ferritin level is a predictable hypothesis. However, in one meta-analysis, the serum ferritin level in CRC patients was lower than that in healthy patients, contrary to the expected result [9].
There are only limited studies on the association between serum ferritin levels and CRC risk, especially in the Asian population. This study aimed to analyze the association between CRC risk and serum ferritin levels in the Korean population using linkage data from the 2008 to 2012 Korea National Health and Nutrition Examination Survey (KNHANES) and the National Health Insurance Services (NHIS) claims database. The KNHANES is a nationwide survey representative of the noninstitutionalized Korean population [10]. Korea's NHIS is a social insurance payment system that covers approximately 97% of the Korean population. The NHIS data include information on all national routine health examinations and claims data. Claims data include diagnostic codes as per the International Classification of Diseases, 10th revision (ICD-10), disease coding system [11]. This national cohort study used KNHANES data collected during 2008 to 2012. Adults aged more than 40 years who had undergone blood tests for the measurement of serum ferritin levels were included in the study. Participants were healthy people without acute illness or severe comorbidities such as end-stage renal disease. We excluded individuals who were menstruating at the time of investigation, who had missing data, and who had previously been diagnosed with CRC or any other cancer. Eligible subjects selected from the KNHANES database were merged with those from the NHIS database to create a cohort dataset. To evaluate newly diagnosed CRC cases, we used cohort data from 2008 with clinical follow-up through to December 31, 2018. Informed consent of participants was not required because this study used data from the national health database, where consent had previously been obtained for the use of the collected data for research. The Institutional Review Board of The Catholic University of Korea (IRB No. HC21ZISI0063) approved this study. The study was conducted in compliance with the principles of the Declaration of Helsinki.

Laboratory measurements and survey of lifestyle-related factors
Details of the KNHANES framework regarding the content of health surveys, standardized physical examinations, laboratory tests, and definitions of risk factors have been described previously [11]. Among the selected participants, specialists performed physical examinations, including body mass index (BMI) calculation and waist circumference measurement, according to standardized methods. Smoking status was divided into three categories: non-smoker, ex-smoker, or current smoker. Alcohol consumption was assessed based on the average number of alcoholic beverages and frequency of alcohol consumption. Heavy drinkers were defined as participants who consumed more than 30 g/day, whereas mild-to-moderate drinkers were defined as participants drinking less than 30 g/day [12]. Physical activity was defined as walking for at least 150 min/week [12]. Diabetes mellitus (DM), hypertension, and hypercholesterolemia were defined as a fasting glucose level of ≥ 126 mg/dL, systolic blood pressure (BP) of ≥ 140 mmHg or diastolic BP of ≥ 90 mmHg, and total cholesterol level of ≥ 240 mg/dL, respectively. Triglyceride, high-density lipoprotein cholesterol, ferritin, and hemoglobin (Hb) levels were obtained from serum or plasma samples at the time of enrollment in the KNHANES. Anemia was defined as a Hb level of < 13.0 g/dL for men and < 12.0 g/dL for women. The criteria for metabolic syndrome were based on a recent study [13].
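To summarise the cutoffs above, here is a compact sketch of the risk-factor definitions as they could be coded; the field names are hypothetical, while the thresholds are those stated in the text.

```python
def classify_risk_factors(p):
    """p: dict of one participant's measurements (hypothetical keys)."""
    return {
        "diabetes_mellitus": p["fasting_glucose_mg_dl"] >= 126,
        "hypertension": p["systolic_bp_mmhg"] >= 140 or p["diastolic_bp_mmhg"] >= 90,
        "hypercholesterolemia": p["total_cholesterol_mg_dl"] >= 240,
        "heavy_drinker": p["alcohol_g_per_day"] > 30,
        "physically_active": p["walking_min_per_week"] >= 150,
        # anemia thresholds differ by sex: < 13.0 g/dL (men), < 12.0 g/dL (women)
        "anemia": p["hb_g_dl"] < (13.0 if p["sex"] == "male" else 12.0),
    }
```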
Clinical outcomes
The primary outcome was newly diagnosed CRC during the follow-up period. Since 2005, the Korean government has implemented policies to expand NHIS benefit coverage to provide financial protection against life-threatening and catastrophic diseases such as cancer, cerebrovascular disease, and heart disease. When a patient is registered for cancer in the NHIS system, the patient is assigned a special code (V193). Therefore, if the participants of this study were newly diagnosed with CRC based on imaging or pathology findings, they were registered in the NHIS system with the V code assignment. We identified patients who were diagnosed with CRC during the follow-up period using the ICD-10 codes (C18, C19, C20) among the participants who were assigned the V codes (V193), according to protocols established in a previous study [13].

Statistical analysis
Summary statistics were expressed as means and standard deviations for continuous variables and as numbers and percentages for categorical variables. Continuous variables were compared using Student's t test or analysis of variance, as appropriate. Categorical variables were compared using the chi-squared test. The incidence of CRC was calculated by dividing the number of CRC patients by the sum of the follow-up durations, presented as the rate per 1,000 person-years. Participants were followed until the first diagnosis of CRC, censoring by death, or December 31, 2018, whichever occurred first. Clinical outcomes were determined using the Kaplan-Meier method and compared using the log-rank test. Cox proportional-hazard models were used to analyze the association of serum ferritin levels with CRC risk. Hazard ratios (HRs) and 95% confidence intervals (CIs) were also calculated. Statistical significance was set at p < 0.05. Multivariable regression models were constructed with no adjustment (Model 1), adjustment for age, sex, smoking, alcohol consumption, and exercise (Model 2), and inclusion of the variables in Model 2 plus BMI, presence of DM, and Hb level (Model 3). All statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, USA).

Availability of data and materials
Individual participant data will not be shared because access to the data requires legitimate administrative approval. Access to the data is restricted until December 2021 in accordance with government regulations.

Study participants
In total, 20,688 people were registered through the KNHANES system during 2008 to 2012, and the datasets were linked to the NHIS system in 2018. After excluding participants who were menstruating, who had missing data including serum ferritin levels, and who had previously been diagnosed with any cancer, 17,116 participants were analyzed (Fig. 1). Table 1 shows the baseline characteristics of the study participants. A higher proportion of participants with CRC were male; were older; and had a history of smoking, DM, and hypertension. The mean serum ferritin level was lower in patients with CRC than in those without CRC, but the difference was not significant. Table 2 shows the baseline characteristics of participants according to the quartiles of serum ferritin level, with cutoff values of 36.33, 66, and 111.14 ng/mL, respectively. The mean ferritin level in each quartile was 19.96, 50.68, 86.07, and 209.03 ng/mL, respectively.
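As an illustration of the analysis described in the statistical analysis section above (incidence per 1,000 person-years and Cox proportional-hazards models across ferritin quartiles), the following is a minimal sketch using Python's lifelines package. Column names are hypothetical, and the original analysis was performed in SAS 9.4, so this is not the authors' code, only a restatement of the approach.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: ferritin (ng/mL), followup_years, crc (1 = newly
# diagnosed CRC, 0 = censored), plus numerically coded Model 3 covariates.
df = pd.read_csv("cohort.csv")

# Ferritin quartiles using the reported cutoffs (36.33, 66, 111.14 ng/mL).
df["ferritin_q"] = pd.cut(
    df["ferritin"],
    bins=[-float("inf"), 36.33, 66.0, 111.14, float("inf")],
    labels=["Q1", "Q2", "Q3", "Q4"],
)

# Incidence per 1,000 person-years in each quartile.
grouped = df.groupby("ferritin_q")
incidence = 1000 * grouped["crc"].sum() / grouped["followup_years"].sum()
print(incidence)

# Model 3: quartile indicators (Q1 as reference) plus covariates.
covariates = ["age", "sex", "smoking", "alcohol", "exercise", "bmi", "dm", "hb"]
model_df = pd.get_dummies(
    df[["followup_years", "crc", "ferritin_q"] + covariates],
    columns=["ferritin_q"], drop_first=True,
).astype(float)  # dummy indicators as floats for the Cox model

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_years", event_col="crc")
cph.print_summary()  # HRs and 95% CIs for Q2-Q4 vs. Q1
```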
There were significant differences in the proportions of age, sex, smoking history, alcohol consumption, BMI, DM, and Hb levels between quartiles of serum ferritin levels (Table 2). A higher ferritin quartile showed a higher proportion of CRC risk factors such as smoking history, alcohol consumption, DM, and obesity (Table 2). Table 3 presents the HRs and 95% CIs for the incidence of CRC according to the serum ferritin level using multiple regression analysis. The unadjusted HR was 0.796 (95% CI, 0.534 to 1.187) in the highest quartile compared with that in the lowest quartile (Model 1). When age, sex, smoking, alcohol consumption, and physical activity were controlled for (Model 2), the adjusted HR of the highest quartile was 0.471 (95% CI, 0.306 to 0.722) in comparison to that of the lowest quartile. With additional adjustment for BMI, DM, and Hb level (Model 3), the adjusted HR was 0.443 (95% CI, 0.285 to 0.687). Interestingly, higher quartiles had lower HRs (Table 3). The quartile of ferritin levels showed an inverse relationship with CRC risk. In particular, after adjustment for multiple covariables, Q3 and Q4 of ferritin levels were significantly associated with a low risk of CRC (HR, 0.65 [95% CI, 0.44 to 0.97] and HR, 0.44 [95% CI, 0.29 to 0.69], respectively). The cumulative incidence curves of CRC according to serum ferritin levels are shown in Fig. 2. In the highest quartile, the curve was distinctly separated from the other quartiles (Fig. 2), and the incidence (1.23 per 1,000 people) was also lower than that in the other quartiles (Table 3).

Subgroup analysis: CRC risk and serum ferritin levels
We evaluated the association between CRC risk and serum ferritin levels in subgroups of age, sex, BMI, metabolic syndrome, DM, and anemia (Table 4). In each subgroup, the serum ferritin level was significantly associated with CRC risk in the age group of 40 to 64 years, males, BMI of < 25 kg/m2, presence of metabolic syndrome, and absence of DM.

DISCUSSION
This is the largest cohort study to analyze the relationship between serum ferritin levels and CRC risk in the Korean population. Serum ferritin levels were inversely associated with CRC risk. In subgroup analysis, there were significant differences in patients with young age and without obesity, DM, and anemia when compared with the other groups. Particularly, in men, a higher ferritin level showed a lower HR, but there was no correlation in women; hence, the sex-based difference was clear. A study on the relationship between iron storage status and CRC risk conducted in the 1980s reported an increasing tendency of CRC risk with high transferrin saturation, unlike the results of our long-term follow-up study; however, the data were not significant, and the number of patients who developed CRC was small, with only 12 cases [7]. The European Prospective Investigation into Cancer and Nutrition (EPIC)-Heidelberg study analyzed the relationship between iron status and cancer risk in various cancers. There were 256 CRC cases, and there was no difference in HR according to the quartile of serum ferritin levels [14]. However, in contrast to the abovementioned studies, three nested case-control studies reported an inverse association between serum ferritin levels and CRC risk [15][16][17]. In particular, Cross et al. [15] reported that CRC risk was inversely associated with serum ferritin level, serum iron level, and transferrin saturation, and all these markers were significant.
On the contrary, although it concerned stomach cancer rather than CRC, the EURGAST study, which investigated stomach cancer risk, found that serum ferritin levels showed the strongest inverse relationship (HR of the fourth quartile versus the first quartile, 0.38; 95% CI, 0.25 to 0.57) among all markers of systemic iron status [18]. These previous studies were conducted in Western countries; hence, epidemiological evidence on the association between ferritin levels and CRC risk in Asians is lacking. To the best of our knowledge, this is the first large-scale nationwide cohort study to investigate this topic in Asians. The results of a meta-analysis of studies comparing CRC patients with normal subjects support our findings, although it was not an epidemiological study; in this report, CRC patients from Eastern countries had significantly lower serum ferritin levels than normal subjects [9]. Two notable factors observed in the subgroup analysis of our study were age and sex. In terms of age, the significance of serum ferritin was prominent in those under 65 years of age but not in those aged 65 years or over. Elderly people are more prone to nutritional deficiency and chronic inflammation. Because these confounding factors can lead to an increase in the level of serum ferritin [19], it seems that the relationship between ferritin levels and CRC risk is more prominent among younger subjects than among elderly individuals. According to the 2011 to 2016 United States statistics, the incidence of CRC decreased by 3.3% per year for those over 65 years of age but increased by 1% per year for those aged 50 to 64 and by 2% per year for those under 50 years of age [2]. Therefore, considering the age-related differences in incidence patterns, active CRC screening is required if younger people have low serum ferritin levels. Second, there was a sex-based difference in the effect of ferritin on CRC risk. In men, the HR of CRC incidence related to ferritin tended to decrease from Q2 to Q4, and there was a clear difference between the HRs of Q4 and Q2. A difference in CRC risk according to ferritin level was also observed in women, but the difference was not significant. The reason seems to be related to iron intake in women; as shown in Table 2, the serum ferritin level in women is lower than that in men. The results of a dietary survey showed that, in general, many women had insufficient intake of red meat containing heme iron [20]. In addition, many women of reproductive age exhibited a negative iron balance because they consumed more foods containing ingredients that impede iron absorption than men did, and there was also blood loss due to menstruation. These factors may complicate the analysis of the relationship between serum ferritin levels and CRC risk in women, unlike in men. This may explain the sex-based difference shown in our study. This sex-based difference has previously been reported by Ekblom et al. [16], where, similar to our study, the reduction of CRC risk with an increase in serum ferritin levels was significantly observed only in men. Therefore, young men with low serum ferritin levels should be carefully considered in active CRC screening tests. There is strong evidence to suggest that dyslipidemia, hypertension, waist circumference, fasting glucose level, and metabolic syndrome are associated with increased CRC risk [13,[21][22][23][24]. In our study, serum ferritin levels in participants with metabolic syndrome showed a strong inverse correlation.
Interestingly, the HR of participants without metabolic syndrome was also less than 0.6 (HR, 0.503; 95% CI, 0.286 to 0.886). This suggests that serum ferritin levels are associated with CRC risk, regardless of the presence or absence of metabolic syndrome. In addition, serum ferritin levels were significantly associated with CRC risk in participants without anemia and with a BMI of < 25 kg/m2 but not in those with anemia and a BMI of ≥ 25 kg/m2. Therefore, in view of these considerations, serum ferritin can assist in identifying people who need active CRC screening tests among healthy people who do not have risk factors commonly known to be related to CRC risk, such as anemia, obesity, and metabolic syndrome. Epidemiological evidence that excessive iron intake increases CRC risk has been reported historically [4]. Most ingested iron is not absorbed and reaches the colorectum [4]. Adenomatous polyposis coli deletion, which is found in most CRC cases, induces the intracellular accumulation of iron in the colorectal epithelium [25]. This activates and enhances the Wnt pathway, a major oncogenic signaling pathway in CRC [25]. As such, there is epidemiological and biological evidence for the association between iron intake and CRC risk, but in a large cohort study, there was no direct correlation between iron intake and systemic iron status [17]. Moreover, contrary to studies showing that iron increases carcinogenic risk, there are reports that, because iron is an important factor in genome protection, iron deficiency induces oxidative stress and DNA damage, and this is particularly involved in tumorigenesis in gastrointestinal cancer [26]. Therefore, there is still much to be clarified regarding the mechanism of the inverse association between ferritin levels and CRC risk identified in our study. Aside from the role of iron and ferritin in CRC pathogenesis at the cellular level, this inverse relationship between serum ferritin and CRC risk may be interpreted as a result of undetectable micro-bleeding, which is commonly observed in CRC. A previous study of 9,238 CRC patients reported that serum ferritin levels measured within 180 days of CRC diagnosis were low [27]. Serum ferritin levels measured around 1 year from diagnosis were normal in most cases [27]. This suggests that factors such as micro-bleeding may affect the change in serum ferritin levels in CRC. However, contrary to this previous result, Kishida et al. [28] reported no difference in serum iron levels in early-stage CRC compared with those in healthy subjects. Therefore, there is insufficient evidence to explain the causal relationship between serum ferritin and CRC risk, although serum ferritin levels were inversely associated with CRC risk in our study. Our study has some limitations. First, ferritin levels can increase in acute inflammatory states [8]. Therefore, serum C-reactive protein (CRP) levels can help differentiate acute infection or inflammatory disease, but we did not determine CRP values in the participants. However, because the KNHANES was conducted on healthy people, participants with acute infection may have been excluded. In addition, comprehensive iron status was not investigated, and periodic follow-up was not performed. All inspections and investigations in our study were carried out at the national administrative level; therefore, academic aspects of the inspection items were not sufficiently detailed. Second, micro-bleeding from undetectable early-stage CRC and consequent iron deficiency may be a confounding factor in our results.
An analysis of the time interval from the measurement of serum ferritin level to the diagnosis of CRC may indirectly provide a clue to this question. However, evaluation of the HR according to time intervals from the time of registration to the time of diagnosis of CRC could not be performed because of limitations of the available data. Instead, in a previous cohort study on stomach cancer, where there is a risk of iron loss due to micro-bleeding, the risk of gastric cancer and serum ferritin levels were inversely related, regardless of the time interval [18]. Another limitation of this study is that our main result is not sex-specific. Since there is a difference in serum ferritin levels according to sex, there may be controversy over applying our results equally to both men and women, although this result was adjusted for sex. To compensate for this limitation, we presented the results of subgroup analysis according to sex. Despite these limitations, the present study has some strengths. It is the first large-scale cohort study conducted nationwide in an Asian population, and it is a cohort study with long-term follow-up. Contrary to the general results between excess iron intake and cancer risk, this study showed an inverse relationship between serum ferritin levels and CRC risk. In relation to CRC risk, it can be inferred that systemic iron status affects carcinogenesis differently from intraluminal iron. Additionally, the inverse relationship between ferritin levels and CRC risk was more prominent in young individuals and males, and a similar trend was found even in the absence of metabolic syndrome, which is commonly known to increase CRC risk. Therefore, in the case of healthy young men with low serum ferritin levels, more careful observation regarding the risk of CRC is required. To further clarify the significance of our findings, additional research is needed on how systemic iron status, including ferritin, affects colorectal carcinogenesis through interactions in the human body.
2022-10-25T06:17:04.418Z
2022-10-25T00:00:00.000
{ "year": 2022, "sha1": "175fbe5231d701cf980b6c183ba886e4520a1c41", "oa_license": "CCBYNC", "oa_url": "https://www.kjim.org/upload/kjim-2022-007.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e412cc2d44896de57f33b16d028eb7ee9ad5f1d0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266226749
pes2o/s2orc
v3-fos-license
Sustained high fatality during TB therapy amid rapid decline in TB mortality at population level: A retrospective cohort and ecological analysis from Shiselweni, Eswatini

Objectives: Despite declining TB notifications in Southern Africa, TB-related deaths remain high. We describe patient- and population-level trends in TB-related deaths in Eswatini over a period of 11 years.

Methods: Patient-level (retrospective cohort, from 2009 to 2019) and population-level (ecological analysis, 2009-2017) predictors and rates of TB-related deaths were analysed in HIV-negative and HIV-coinfected first-line TB treatment cases and the population of the Shiselweni region. Patient-level

INTRODUCTION
About 2.5 million cases of tuberculosis (TB) and 377,000 TB-related deaths were estimated in Sub-Saharan Africa in 2019 [1]. Southern Africa is particularly affected because of the intertwined HIV epidemic, with more than half of all TB treatment cases co-infected with HIV [1]. The risk of TB disease is higher in people living with HIV (PLHIV), mainly due to reactivation of latent TB [1], with the highest death rates reported in patients with advanced HIV disease [2][3][4].

Strategies addressing the interlinked HIV and TB epidemics are various [5][6][7][8][9]: reducing the risk of developing TB disease (e.g., TB preventive therapy), infection control, intensified TB case finding (e.g., index case investigation), scale-up of sensitive and closer-to-the-patient diagnostics (e.g., Xpert MTB/RIF assay, urine lateral flow lipoarabinomannan assay [TB-LAM]), and provision of antiretroviral therapy (ART) at the time of HIV diagnosis (treat-all). Access to TB care has also improved in remote locations and for vulnerable populations through provision of decentralised and integrated HIV/TB services at routine clinics and community settings [6,[10][11][12].

For instance, these concerted efforts improved the quality and comprehensiveness of TB care, resulting in a decrease in TB notifications and improved treatment outcomes in Eswatini between 2009 and 2016 [13,14]. However, many resource-limited settings (RLS) sustained high TB mortality at the population level (e.g., >20 deaths per 100,000 population in Southern Africa in 2019) [1], particularly in TB patients co-infected with HIV [1,13,15]. Sustained high TB-related deaths could jeopardise achievement of the targets of the End TB strategy, which aims at a 95% reduction in TB deaths between 2015 and 2030 [16].

Data from Eswatini revealed possibly divergent trends concerning TB-related deaths. First, despite an improvement in TB treatment success from 73% in 2012 to 83% in 2017 [17,18], there was a consistently high proportion of deaths reported in patients undergoing treatment, notably among PLHIV (13% in 2016) [18]. Second, annual TB incidence declined rapidly from 1350 cases per 100,000 population in 2012 to 363/100,000 population in 2019 [1,19], thus likely contributing to an overall decrease in TB-related deaths in the population. These contrasting patterns between sustained high mortality during treatment and the probable swift decline in TB-related deaths at the population level require further understanding, especially in resource-poor settings where HIV and TB prevalence is high and both epidemics are intertwined. Thus, we aim to provide a detailed comparison of trends and predictors of TB-related deaths in Eswatini, spanning an 11-year period, from both individual and population perspectives.
Setting
The predominantly rural Shiselweni region (203,376 inhabitants in 2017) in Eswatini comprises the Nhlangano, Hlathikulu and Matsanjeni health zones [20]. The region had a high HIV prevalence of 26% in ≥15-year-olds in 2017 [21]. Although regional drug-sensitive and drug-resistant TB notifications declined fivefold from 1341 to 269 cases per 100,000 population between 2009 and 2016, HIV coinfection among TB cases remained high at approximately 71% [14]. Figure 1 summarises the main programmatic interventions implemented in the region [14,22,23]. HIV and TB care, initially centralised at three secondary care facilities in 2008, was decentralised, with the support of Médecins sans Frontières (MSF), to 22 nurse-led and medical doctor-supported primary care clinics between 2009 and 2011. The public sector primarily delivered TB care in the region, while the private sector played a minor role. Laboratory capacity was expanded with the introduction of Xpert MTB/RIF testing in the three health centres in 2009 and TB-LAM testing in all clinics of the Nhlangano health zone in 2018. ART eligibility criteria evolved from 2009 onwards, initially targeting HIV-coinfected TB patients and PLHIV who had a CD4 count ≤200 cells/mm3 (Figure 1). Eventually, universal ART (treat-all) was piloted in the Nhlangano zone in 2014 and extended to the other health zones 2 years later.

Study design, data management and definitions
We analysed patient-level (retrospective cohort analysis, from 2009 to 2019) and population-level (ecological analysis, from 2009 to 2017) trends and predictors of TB-related deaths in first-line TB treatment cases in Shiselweni. Patients registered in routine paper-based drug-sensitive TB registers at facilities in the region were considered first-line TB treatment cases. The TB-related variables were defined in accordance with WHO recommendations [24]. The date of registration, corresponding to the date of prescription of TB drugs, was considered time zero (start of the observation period). Final TB treatment outcomes were assigned ≥1 year after registration. Patients dying during treatment were considered TB-related deaths. TB treatment cases that were transferred in were excluded from analyses to avoid duplications. These registers were the only data source for this study. For the cohort analysis, we used treatment data from Shiselweni from January 2009 to December 2017, as well as from the Nhlangano health zone from 2018 to 2019. Patient-level treatment data were unavailable for the other two zones for 2018-2019. For the ecological analysis, we considered TB cases from the entire region for the period from 2009 to 2017 only, lacking sex- and age-stratified population-level data for the Nhlangano zone thereafter. Assumptions behind the relationships between variables for the cohort analysis were summarised in a directed acyclic graph (Figure S1). We identified factors that may be (in)directly associated with timely TB care registration and death during therapy, which were then included in the analysis. All available variables were incorporated into the final fitted models for multiple imputation and Poisson regression.

Statistical analysis
Analyses were performed with Stata 16.1. Frequencies and proportions describe baseline and crude patient-level and population-level outcome data, overall and by HIV status. Annual treatment outcomes were reported in the same year as TB treatment initiation (cohort specific).
Multiple imputation by chained equations was used to account for missing baseline values, with 10 imputed datasets created and imputation diagnostics satisfied (Figures S2 and S3). We also accounted for undocumented deaths. Some treatment outcomes (loss to follow-up [LTFU], treatment failure, outcome not evaluated, transfer out) may contain undocumented deaths, possibly resulting in underestimation of TB-related deaths. Therefore, these outcomes (17.7%) were assumed missing, with multiple imputation used to obtain the final binary outcome of treatment success and death (Table 1). Interactions among variables were not evaluated in the process of multiple imputation. However, interactions of HIV status with other variables were assessed for the cohort and ecological analyses.

Cohort analysis (2009-2019)
We used multivariable Poisson regression on the imputed datasets to describe associations with death. One model was fitted for the entire cohort, and, as HIV status interacted with some covariates, separate models were fitted for HIV-negative, HIV-positive and missing status.

Ecological analysis (2009-2017)
Population-level denominators were calculated by combining mid-year sex- and age-stratified 2007 and 2017 housing census population estimates from Shiselweni [25,26], with an annual negative linear population growth assumed for the years between 2007 and 2017 [26], with the corresponding regional sex- and age-stratified HIV prevalence estimates [27][28][29]. Lacking disaggregated HIV prevalence estimates in older people, we combined people aged ≥60 years into one age category. Then, in each imputed dataset, we divided the annual numbers of deaths stratified by age, sex and HIV status (numerator) by the corresponding population denominators and multiplied by 100,000 to obtain stratified mortality rates per 100,000 population. These stratified mortality rates were averaged across all imputed datasets to obtain mortality rates adjusted for undocumented deaths. The peak year for TB-related deaths was identified as the year with the highest number of deaths or the highest mortality rate between 2009 and 2017. Using the same imputed stratified population-level data, multivariable Poisson regression models with robust standard errors were built to describe associations between population-level factors and TB-related deaths. Models were fitted for the entire Shiselweni population, and separately for the HIV-positive and HIV-negative populations. Given the potential association between changing rates of TB infection in the population and changes in population-level mortality, we used the annual number of treatment cases as a proxy factor for the risk of population-level TB infections.

Ethics
This analysis was approved by the Eswatini Health and Human Research Review Board. It also fulfilled the exemption criteria set by the MSF Ethics Review Board (ERB) for a posteriori analysis of routinely collected clinical data and thus did not require MSF ERB review.

The tests were exclusively available in laboratories at the three secondary care facilities, necessitating primary care clinics to send sputum samples for testing to these facilities a few times per week. **** These are the regional estimates of the proportion of people who knew their HIV-positive status among all PLHIV, who received ART among all PLHIV, and who were virally suppressed among all PLHIV. These population-level indicators were obtained from two population-based HIV incidence measurement surveys [21,28,29].
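The following is a minimal sketch of the population-level computation described above (stratified rates per 100,000 and Poisson regression with robust standard errors). Column names are hypothetical and the paper's analysis was performed with Stata 16.1, so this only illustrates the approach, not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical columns, one row per year x sex x age group x HIV status:
# n_deaths (imputation-averaged numerator), population (mid-year denominator),
# tb_cases (annual treatment cases, the proxy for population-level TB infection risk).
strata = pd.read_csv("strata.csv")

# Stratified mortality rate per 100,000 population.
strata["rate_per_100k"] = 100_000 * strata["n_deaths"] / strata["population"]

# Poisson model with log(population) as offset, so exponentiated coefficients
# are interpretable as rate ratios; robust (sandwich) errors via cov_type="HC0".
model = smf.glm(
    "n_deaths ~ C(year) + C(sex) + C(age_group) + tb_cases",
    data=strata,
    family=sm.families.Poisson(),
    offset=np.log(strata["population"]),
)
result = model.fit(cov_type="HC0")
print(np.exp(result.params))  # adjusted rate ratios
```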
Baseline factors
Of 11,883 TB treatment cases, 10,257 (86.7%) presented with pulmonary TB alone. A total of 2798 (23.6%) patients were HIV-negative, 8443 (71.1%) were PLHIV, and 642 (5.4%) had an unknown HIV status. PLHIV (vs HIV-negative patients) were more likely to be in the 20-49 years age group, to be women, to present at the secondary care level, to have extra-pulmonary TB, to be retreatment cases and to have a negative bacteriological status (Table 2). Most PLHIV initiated TB treatment before universal ART (treat-all) and TB-LAM became programmatically available. Most PLHIV received ART during TB treatment and had CD4 cell counts below 200 cells/mm3.

Crude TB treatment outcomes
Treatment success increased from 65.7% to 81.5% between 2009 and 2019 and was highest in 2018 (84.8%) (Figure 2, Table S1). Treatment success tended to be higher for HIV-negative patients in all years; in 2019 it was 86.4% in HIV-negative patients versus 80.5% in PLHIV. Of 11,883 TB treatment cases, 1302 (11.0%) patients were registered as dead during treatment. By HIV status, there were 210/2798 (7.5%) deaths in HIV-negative patients, 984/8443 (11.7%) deaths in PLHIV, and 108/642 (16.8%) in patients with unknown HIV status. While the absolute numbers of TB treatment cases and deaths declined each consecutive year (Table S1), PLHIV sustained higher case fatality ratios (CFRs) in all years (Figure 3). The CFR remained above 10% in most years (other than 2009, 2017 and 2018), with the highest CFR recorded in 2015 (15.3%) and the lowest in 2018 (5.7%). For HIV-negative patients, only the years 2012 and 2013 had reported CFRs of 11.8% and 11.2%, while the CFR ranged between 4.5% and 9.2% in the remaining years.

Potential undocumented deaths
A total of 2109 (17.7%) patients had an outcome recorded that could include undocumented deaths (LTFU, treatment failure, outcome not evaluated, transfer out). For the entire cohort, this was highest in 2009 (24.9%) and decreased to 10.9% in 2019. It was highest in patients with missing HIV status (39.9%), followed by PLHIV (17.5%) and HIV-negative patients (13.3%). Accounting for undocumented deaths using multiple imputation, the overall proportion of deaths increased from 11.0% in the crude dataset to a mean proportion of 13.5% (n = 1610) across the 10 imputed datasets (minimum: n = 1582, 13.3%; maximum: n = 1650, 13.9%).

Note: Multiple imputation by chained equations accounted for missing baseline values in sex (0.1%), age (0.5%), TB classification (0.7%), TB site (0.4%), and CD4 cell count in HIV-positive patients (60.8%). Missing values for bacteriological status and HIV status were considered to be missing not at random. We therefore decided to categorise the data of these variables and added a missing data indicator category. According to guidance of the World Health Organisation at that time [57], initiation of empiric TB treatment was encouraged for individuals with HIV coinfection and suspected TB disease, which could lead to instances of missing bacteriological status when access to TB testing was impractical or delayed [14]. Additionally, missing HIV status is likely not at random (conditional on measured covariates), as seen in other studies [58,59], with missingness associated with a higher probability of incomplete treatment, treatment failure, and death [60,61]; that is, the missing HIV status values affect the missingness probability. Ten imputations were performed separately by HIV status, using identical imputation methods for missing baseline and outcome values. The imputation model, utilising all available variables (see Table 2), employed logistic regression for binary variables and predictive mean matching for continuous variables, and did not incorporate interactions. a We assumed that undocumented deaths can occur in patients with the following outcomes as recorded in the TB treatment register: loss to follow-up, treatment failure, outcome not evaluated, transfer out.

In separate analyses by HIV status (Table 3), for both PLHIV and HIV-negative patients, the older age groups (20-49 and ≥50 years) had a higher fatality risk compared with ≤19 years, as did patients with extra-pulmonary TB (vs. pulmonary TB) and missing bacteriological status (vs. bacteriologically confirmed TB). Associations appeared more pronounced for HIV-negative patients. In addition, for HIV-negative patients only, a negative bacteriological TB status increased the fatality risk, and the programmatic availability of on-site Xpert possibly did so as well (aRR 1.60, 1.00-2.55).

Overall, the annual proportions of patients with missing outcomes, treatment failure (which may reflect drug resistance) and transfer out (TFO) appeared comparable between HIV-positive and HIV-negative patients, while the proportion of deaths was more pronounced in HIV-positive patients. For patients with missing HIV status, about 40% achieved treatment success, and the main other outcomes were loss to follow-up (LTFU), death and missing outcomes. The annual treatment outcomes for patients with missing HIV status are only presented until 2012 due to low absolute numbers thereafter. Moreover, the treatment outcomes for the years 2018 and 2019 exclusively include patients from the Nhlangano health zone, due to the lack of TB treatment data from the other two health zones.

For PLHIV (Table 3), the fatality risk was higher in TB retreatment cases (aRR 1.38, 1.18-1.61) and possibly for each increase in calendar year (aRR 1.07, 1.00-1.14). The fatality risk was increased for patients without ART during TB therapy (aRR 1.70, 1.47-1.97), while it decreased for consecutively higher CD4 cell count strata, being almost halved for CD4 ≥350 cells/mm3 (aRR 0.54, 0.43-0.67) versus CD4 ≤100 cells/mm3. Finally, the programmatic availability of TB-LAM also decreased the fatality risk (aRR 0.65, 0.35-0.90). In patients with missing HIV status, only older age and TB retreatment were clearly associated with an increased fatality risk (Table S3).

The sensitivity analyses, which involved treating calendar year as a categorical variable with and without inclusion of the variable ART eligibility criteria, applying varied age categories, excluding data from the Nhlangano health zone for the years 2018 and 2019, and restricting the analysis to the years 2013-2019 (due to a substantial proportion of missing HIV and CD4 values in earlier years), confirmed the results obtained from the main models.
In PLHIV (Table 5), the mortality risk tended to decrease in consecutive years from 2011 onwards, but the decrease was statistically significant only from 2014 (aRR

DISCUSSION
This study from a high HIV and TB burden setting showed differentiated trends in TB-related deaths. Although overall treatment success increased over time, the CFR remained high and was mainly driven by PLHIV. In contrast, at the population level, the absolute number of TB-related deaths and mortality rates declined rapidly over a decade, particularly in PLHIV.

Treatment fatality in treatment cases
The overall crude annual treatment fatality remained high throughout the years. Despite a decline in the CFR of HIV-negative patients from 2014 onwards, the overall high CFR was driven and sustained by PLHIV and was comparable to previously reported trends from Eswatini and other RLS [15,30]. Similar to other settings [31][32][33][34], older age increased the fatality risk irrespective of HIV status, possibly related to factors increasing biological vulnerability to TB disease and unmeasured competing causes of death (e.g., cardiovascular disease, cancer) in older people. Interestingly, sex did not show an obvious association with case fatality, but estimates may have been distorted by lack of adjustment for other socio-demographic factors or correlations between covariates. Comparable to other studies [30], case fatality was higher in HIV-negative patients and PLHIV who presented with extra-pulmonary TB and missing bacteriological status, while a negative bacteriological status increased the fatality risk in HIV-negative patients only. Possible explanations are (undiagnosed) underlying infectious diseases (e.g., pneumocystis pneumonia in PLHIV) and other chronic lung diseases (e.g., lung cancer, silicosis) associated with high case fatality, delays in diagnosis and treatment (e.g., ruling out other infectious diseases), unrecognised drug-resistant TB disease, and possibly more disseminated and severe disease in patients with extra-pulmonary TB disease.

TABLE 4 Annual TB-related mortality rates per 100,000 population by HIV status and selected demographic factors, accounting for undocumented deaths, in Shiselweni, from 2009 to 2017.

Importantly, negative bacteriological status, which is more common in immunosuppressed patients and a risk factor for death in advanced HIV disease [30,35], was not clearly associated with increased case fatality in PLHIV. In our setting, health workers were capacitated to start TB treatment in bacteriologically negative cases irrespective of HIV status, potentially reducing delays in treatment initiation in those at greatest need of treatment. Previous TB therapy was clearly associated with a higher fatality risk in HIV-coinfected patients, while the association in HIV-negative patients was not obvious. In Eswatini, retreatment cases and PLHIV are more likely to present with drug-resistant TB disease [36], which is associated with increased case fatality. Accurate and timely diagnosis of drug-resistant TB was compromised in our setting because of suboptimal access to culture-based drug-resistance testing, and the inability of the Xpert and line probe assays to detect the locally prevalent RpoB I491F rifampicin resistance mutation [36][37][38], thus possibly resulting in misclassification of drug-sensitive and drug-resistant TB disease. These challenges suggest a need for faster and more accurate tests for rifampicin-resistant TB disease as well as more potent first-line TB treatment regimens that could cover undiagnosed
drug-resistant TB disease. Similar to other studies [2][3][4], higher CD4 cell counts and ART decreased the fatality risk, possibly due to less severe HIV disease and improved immunity. However, the overall impact of programmatic and policy factors mainly targeted at PLHIV remained inconclusive. While the programmatic availability of Xpert testing and the expansion of ART eligibility criteria lacked strong associations, the availability of TB-LAM testing was associated with a lower fatality risk. Although Xpert and TB-LAM testing may improve diagnosis and result in more timely treatment [39][40][41][42][43], the effect of Xpert on case fatality remains inconclusive, as does the population-level impact of TB-LAM [44][45][46]. Patients only benefit from improved diagnostics if these are available and used at sites where treatment decisions are made, health workers correctly perform and interpret them, and administrative delays are minimised (e.g., reporting back of test results). Although health policies promoting earlier access to ART may reduce periods of increased risk for (severe) TB disease, a patient-level benefit may only be achieved if patients actually initiate ART at the appropriate time and remain virally suppressed. Notably, as suggested by a 7% increased fatality risk for each consecutive calendar year in PLHIV, other opposing temporal factors may hide possible patient-level benefits of policy and programmatic interventions, for instance when they are related to unmeasured temporal changes in access to care (e.g., hard-to-reach and hard-to-treat patients are possibly proportionally overrepresented in recent years). Overall, the findings suggest that theoretically valuable programme and policy interventions at the population level may not always translate into obvious patient-level health benefits, or that the effects of new interventions may be difficult to isolate in observational studies.

Note: In sensitivity analyses omitting the covariate TB case decrease, effect estimate trends remained consistent across covariate factors for the HIV-positive population, though a more pronounced association was noted for calendar year compared with the main model. In the HIV-negative population, a similar trend emerged, with men exhibiting an increased mortality risk (aRR 2.30, 1.78-2.97). Abbreviations: aRR, adjusted risk ratio; CI, confidence interval.

TB-related mortality at population level
The absolute crude number of TB-related deaths was highest in 2010 and decreased each consecutive year thereafter.
Although mortality rates remained higher in PLHIV, as also reported elsewhere [2][3][4], the absolute decline was most pronounced in PLHIV and was seen in all age groups and both sexes. This decline coincided with rapidly falling numbers of TB treatment cases in this region [14], suggesting that changes in TB notifications and treatment cases may be an important driver of changes in TB-related mortality at the population level. With a lower prevalence of active TB in the population, the absolute number of TB cases resulting in death decreases. This inference is supported by the multivariate analysis, indicating an association between declining TB notifications and a lowered mortality risk. However, this risk reduction may primarily signify a decreased risk of TB infection in the population rather than a reduction in mortality alone. Although a spurious decline in TB treatment notifications may have resulted in more undiagnosed TB disease and undocumented deaths, the decline in TB is likely real in our setting due to higher ART coverage in PLHIV as well as increased case-finding activities and better TB diagnostics in recent years, in addition to a similar decline reported from other parts of the country and other settings in Southern Africa [13,[47][48][49]. Future declines in TB-related mortality may be achieved by further reducing active TB disease, for instance by reducing the pool of people with increased susceptibility to TB infection (e.g., reducing HIV infections, early ART) and decreasing the risk of activation of latent TB infection with TB preventive therapy. Overall, the population-level multivariable analysis confirmed the crude trends, with a rapid mortality risk reduction in PLHIV but a less obvious mortality risk reduction in the HIV-negative population when compared with 2009. The crude decline in HIV-negative cases may have been too small to detect an obvious temporal trend in the adjusted analysis. In the HIV-negative population, the elderly population had the highest mortality risk, surprisingly followed by the 20-29 years group (vs. 30-39 years). The latter may be a spurious finding or explained by unaccounted HIV infections. For instance, the mortality risk in PLHIV was also high in young adults (e.g., 15-34 years), who were known to have high levels of undiagnosed, untreated and virally unsuppressed HIV infections [21,28]. Notably, the relative mortality risk was comparable between sexes. This may be explained by complex differentiated temporal trends in sex-related and HIV-associated TB factors with regard to epidemiology, behaviour and exposure, biological and genetic determinants, and access to care [50][51][52][53][54], possibly balancing out crude differences between sexes.
Limitations and strengths
First, we accessed first-line TB treatment data, and were thus unable to account for undiagnosed TB or pre-treatment mortality (38% in some African settings [53]), or for deaths related to drug-resistant TB therapy. Our estimates of TB-related mortality rates were therefore at the lower range (27/100,000 in 2017) compared with WHO estimates (55/100,000 in 2017) [55]. In contrast, all deaths during TB treatment were assumed to be related to TB disease, which may overestimate mortality rates if some deaths were caused by other conditions. Second, although population-adjusted estimates of TB mortality depended on the accuracy of disaggregated population-level denominators, overall trends in mortality rates appeared reasonable, with higher rates and risks of death in PLHIV, older people, and the middle age groups for PLHIV. Third, our dataset contained missing values. Under the assumption that values were missing at random (MAR), multiple imputation was used to impute missing baseline values as well as treatment outcomes that could not clinically be assigned to the binary outcome of death or treatment success. Multiple imputation allows all observations to be retained, thus increasing power and precision [56]. As patients lost to follow-up and transferred during TB treatment have a risk of death [32], the omission of non-binary outcomes would have disregarded undocumented deaths and likely resulted in an underestimation of TB-related deaths. Importantly, the assumption of MAR may have been violated and TB-related deaths underestimated if, for instance, LTFU and treatment failure have a disproportionally higher risk of undocumented deaths, or if missing categories have a meaning (e.g., sicker patients are less likely to have measurements). The absence of contextualised information about the reasons for and extent of deaths among those LTFU prevented informed assumptions and adjustments in multiple imputation. Alternative correction methods, such as confirming undocumented deaths with the National Population Register, were not feasible due to limited access. As a result, our imputation model might have contributed to underestimating TB-related deaths. Finally, despite a high proportion of missing values for CD4 cell count, we retained it in analyses due to the absence of alternative immunosuppression proxy variables. Avoiding additional assumptions about missingness, we primarily attributed it to inadequate documentation, given the availability of point-of-care CD4 testing in clinics since the early years. Anticipated associations with deaths were consistent with findings in comparable studies [2][3][4]. Fourth, we could not adjust for some unmeasured factors that were identified in the directed acyclic graph (DAG) as associated with TB treatment initiation and the outcome, thus possibly resulting in biased effect estimates. They included communicable (e.g., lower respiratory infections due to pathogens other than TB) and non-communicable (e.g., diabetes mellitus) comorbidities, disease severity and socio-economic determinants (e.g., employment status, income), as well as factors influencing HIV and TB treatment (e.g., delayed TB diagnosis and treatment, viral load suppression in HIV-coinfected patients, other concurrent treatments).
A strength was that the study was conducted in a routine public sector setting, generalizable to many rural contexts in Southern Africa with an intertwined HIV/TB epidemic. In addition, the long observation period allowed us to explore temporal trends amid the rapid expansion of TB and HIV care.

CONCLUSIONS
The TB-related CFR remained high in this high HIV/TB burden setting, specifically among PLHIV, who also had more risk factors associated with death during treatment. In contrast, population estimates indicated a rapid decline in TB-related mortality, mainly driven by a decrease in TB treatment notifications in PLHIV. Continued concerted efforts are required to further reduce active TB disease in high HIV/TB burden settings.

FIGURE 1 Overview and timelines of main TB and HIV programmatic changes in the Shiselweni region from 2009 to 2019. ART, antiretroviral therapy; CD4, CD4 cell count in cells/mm3; LAM, urine lateral flow lipoarabinomannan assay. *Only data from Nhlangano health zone were available for the years from 2018 to 2019. **The treat-all approach was piloted in Nhlangano health zone from 2014 onwards, while it became available in the other health zones from 2016 onwards. Treat-all entails that every individual diagnosed with HIV becomes eligible for antiretroviral therapy (ART) immediately upon HIV diagnosis, without consideration of immunological criteria, such as CD4 levels and WHO staging. ***Patients with missing HIV status.

FIGURE 2 Annual treatment outcomes in absolute numbers and proportions (relative contributions) for the entire TB treatment cohort and disaggregated by HIV status in Shiselweni, 2009 to 2019. The absolute number of patients with recorded TB treatment outcomes decreased in consecutive years, irrespective of HIV status. The annual proportion of patients with treatment success increased between 2009 and 2019, with a greater increase in HIV-negative than HIV-positive patients.

FIGURE 3 Annual trends in TB-related deaths in absolute numbers and case fatality ratios (CFRs) by HIV status. CFR, case fatality ratio; n, number. TB-related deaths and CFRs in 2018 and 2019 are estimates for Nhlangano health zone only due to unavailability of TB treatment data for the other two health zones. The 2018 CFRs of the entire cohort and of HIV-positive and HIV-negative patients are graphically overlaid. The CFR for patients with missing HIV status is only presented until 2012 due to low annual case numbers thereafter.

TABLE 3 Univariate and multivariate analyses of factors associated with TB-related treatment case fatality by HIV status in Shiselweni, 2009-2019.

FIGURE 4 Annual trends in crude and per 100,000 population-adjusted TB-related mortality rates by HIV status and demographic factors in Shiselweni, from 2009 to 2017 (panel (d): deaths by age group in HIV-negative patients). a Analyses were performed on 10 imputed datasets to account for missing baseline values and undocumented deaths. b This is a proxy variable for the risk of TB infection in the population.

TABLE 1 Baseline characteristics (number and percentage) of TB treatment cases by HIV status in Shiselweni, from 2009 to 2019. Missing baseline values and outcomes with possible undocumented deaths were accounted for with multiple imputation by chained equations.

TABLE 2 a ART eligibility criteria are defined as CD4 cell count-based eligibility thresholds for ART initiation for all people living with HIV who do not present with TB disease. All patients presenting with TB disease were eligible for lifelong ART initiation. b In the absence of ART initiation dates, the variable ART during treatment encompasses patients who had already started ART before TB treatment and those who commenced it during therapy.

TABLE 4 (continued) Annual mortality rates per 100,000 population. a The peak years of mortality rate are in bold. b Data not presented as the peak year was 2009.

TABLE 5 Multivariate analysis of population-level predictors of TB-related mortality by HIV status in Shiselweni, 2009 to 2017.
2023-12-16T12:45:58.210Z
2023-12-15T00:00:00.000
{ "year": 2023, "sha1": "4456e2274429675603d395afe47fdaeb2f1bb071", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/tmi.13961", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e673a528642eafd858c85e0b57841741ccc21533", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
250589547
pes2o/s2orc
v3-fos-license
Association between Endodontic Infection, Its Treatment and Systemic Health: A Narrative Review
The 'Focal Infection Era in Dentistry' in the late 19th and early 20th century resulted in widespread implementation of tooth extraction and limited the progress of endodontics. The theory proposed that bacteria and toxins entrapped in dentinal tubules could disseminate systemically to remote body parts, resulting in many types of degenerative systemic diseases. This theory was eventually refuted because it rested on anecdotal evidence. However, lately there has been increased interest in investigating whether endodontic disease could have an impact on general health. There are reviews that have previously been carried out on this subject, but as new data have emerged since then, this review aims to appraise the available literature investigating the dynamic associations between apical periodontitis, endodontic treatment, and systemic health. The available evidence regarding focal infection theory, bacteraemia and inflammatory markers was appraised. The review also collated the available research on the associations of apical periodontitis with cardiovascular diseases, diabetes mellitus, adverse pregnancy outcomes and autoimmune disorders, along with the effect of statins and immunomodulators on apical periodontitis prevalence and endodontic treatment prognosis. There is emerging evidence that bacteraemia and low-grade systemic inflammation associated with apical periodontitis may negatively impact systemic health, e.g., development of cardiovascular diseases, adverse pregnancy outcomes, and diabetic metabolic dyscontrol. However, there is limited information supporting the effect of diabetes mellitus or autoimmune disorders on the prevalence of apical periodontitis and the prognosis post endodontic treatment. Furthermore, convincing evidence supports that successful root canal treatment has a beneficial impact on systemic health by reducing the inflammatory burden, thereby dismissing the misconceptions of focal infection theory. Although compelling evidence regarding the association between apical periodontitis and systemic health is present, further high-quality research is required to support and establish the benefits of endodontic treatment on systemic health.
Apical Periodontitis Aetiology
Endodontic infection is a polymicrobial infection, and the diversity of the endodontic microbiome and its host interactions presents not only a unique challenge to treatment, but also a potential risk for systemic disease in other parts of the body [1]. Chronic apical periodontitis is a dynamic sequel to root canal infection. It is driven by persistent localized inflammation within the periapical tissue that can lead to progressive bone resorption and the formation of periapical lesions. If left untreated, it can form a sinus tract and lead to cyst formation [2,3]. Apical periodontitis involves activation of both the innate and adaptive immune systems, characterized by the recruitment of various types of cells and inflammatory mediators, which eventually leads to the destruction of periapical tissue and the formation of periapical lesions [4] (Figure 1). Therefore, apical periodontitis is the consequence of a complex interplay between the microbiota of the root canal system, microbial virulence factors and the host immune response [4].
Apical Periodontitis-A Global Burden
Apical periodontitis poses a significant global burden. The NHS in England and Wales reported that over 1 million teeth received root canal treatment (RCT) between 2001 and 2004, costing the NHS around GBP 50.5 million [5]. According to the American Association of Endodontists, more than 15 million root canal treatments are performed each year [6]. In Europe, it is reported that almost 23 million endodontic treatments are undertaken yearly [7]. Furthermore, a systematic review reported a high global prevalence of apical periodontitis (5% of all teeth, one periapical lesion per patient) and root canal treatment (10% of all teeth, two root canal treatments per patient) in an adult population [8]. These numbers highlight that root canal treatment addresses one of the most common oral diseases, which could increase the burden on systemic health as well as the burden of costs on health services globally.
Focal Infection
Focal infection is defined as a localized or generalized infection caused by the systemic spread of bacteria or their products from distant foci of infection [9]. A focus of infection is "a confined area that is chronically infected with pathogenic microorganisms" [10]; it may be clinically asymptomatic and can occur anywhere in the body. In medical and dental literature, the teeth and oral tissues, tonsils, adenoids, etc., have all been cited as putative foci of infection [11,12]. The height of focal infection theory's popularity was during the late 19th and early 20th century, a period known as the 'Focal Infection Era in Dentistry'.
During this focal infection era, rheumatoid arthritis was closely associated with dental health. This resulted in the widespread removal of teeth, adenoids, tonsils, and other organs for many decades, in an attempt to cure many unexplained illnesses that were allegedly caused by focal infection. In 1891, Miller proposed that oral microorganisms and/or their by-products can spread to distinct body parts, drawing attention towards the relationship between oral and systemic disease [13]. Although Miller's claims were not based on scientific grounds and drew mostly on unsubstantiated anecdotal evidence and case reports, William Hunter proposed that oral microorganisms and their toxic by-products can spread from a focus of infection and cause a range of systemic conditions [14,15]. In 1925, Weston Price advocated tooth extraction as the treatment of choice, believing that toxins and bacterial components produced by residual bacteria entrapped in dentinal tubules act as antigens (substances that are foreign to the host), and that these antigens may travel through the bloodstream and lymphatic system to remote body parts and play an etiological role in causing many types of degenerative systemic diseases. However, Easlick (1952) pointed out flaws in Price's methodologies and refuted any associations between endodontically treated teeth and systemic disease [9]. This subject lay dormant for decades due to a lack of direct cause-and-effect evidence until Newman [16] brought it back into attention. Since then, various studies have attempted to investigate whether endodontic disease, as a localized oral infection, could have an impact on the host immune response, compromising the general health of individuals.
Endodontic Disease and Systemic Impact
Recently, there has been a shift in endodontics, from a discipline of pain management, infection control and tooth preservation toward considering oral infections as risk factors for systemic complications. The impact of apical periodontitis extends beyond its dental implications, e.g., tooth extraction (60-80% of cases) [5]. There has been a resurgence of the "Focal Infection Theory", and this correlation between focal infection in the oral cavity and systemic diseases has again provoked global attention [17]. Apical periodontitis can affect a patient's health in terms of both the pathogenic effects of polymicrobial communities and the host immune responses [2]. Endodontic disease can result in translocation of microbes from the root canal into the systemic environment, triggering immune responses that can affect other tissues/organs. Studies have linked apical periodontitis with systemic diseases including diabetes [18], hypertension [19,20], adverse pregnancy outcomes [21], skeletal infections, and coronary heart disease (CHD) [22][23][24][25][26][27][28][29][30][31][32][33][34][35], the most common type of cardiovascular disease (CVD). This is due to increased risk of bacteraemia [36][37][38][39] and translocation of soluble microbial compounds, active inflammatory mediators and haemostatic factors from the root canal into the systemic environment [40][41][42], resulting in metastatic infection, injury and inflammation, triggering low-grade systemic inflammation affecting other tissues and organs (Figure 2). Clinically, apical periodontitis can present completely asymptomatically and be detected as an accidental finding on an intraoral radiograph as a periapical radiolucency, with no obvious signs and symptoms such as pain, swelling, abscess or sinus tract.
So, not only symptomatic cases but also asymptomatic cases that remain unnoticed for years may have an adverse effect on a patient's general health. Therefore, endodontic disease poses a major global health burden.
Similarities between Periodontal and Endodontic Disease Impacting Systemic Health
There is strong evidence in the literature correlating periodontal infections with increased risk of cardiovascular disease development [43][44][45][46][47]. A cross-sectional analysis of a large-scale study with a cohort of 60,174 individuals, after screening all patients' records over 15 years, concluded that there is an independent association of periodontitis with atherosclerotic cardiovascular diseases [48]. In a nationwide retrospective study, Byon et al. (2020) found that periodontitis can increase the risk of atherosclerotic cardiovascular disease, and that its prevention may help in reducing the risk of cardiovascular disease [44]. Furthermore, the Consensus Report based on four papers [49][50][51][52] in a joint workshop organised by the European Federation of Periodontology and the American Academy of Periodontology in 2013 [53] concluded that there was strong and consistent epidemiological evidence that periodontitis results in increased risk of future atherosclerotic cardiovascular disease. This impact is biologically caused by translocated circulating oral microorganisms, directly or indirectly inducing systemic inflammation resulting in the development of atherothrombogenesis [53]. Recently, a workshop jointly organized by the European Federation of Periodontology and the World Heart Federation in 2020 [54] concluded in its latest consensus report that there was strong evidence that periodontitis patients exhibit a significant prevalence of subclinical cardiovascular disease, heart failure and higher cardiovascular mortality (due to coronary heart disease and cerebrovascular disease). Other than cardiovascular diseases, studies have also linked periodontitis with type 2 diabetes mellitus, Parkinson disease, chronic obstructive pulmonary diseases, pneumonia, adverse pregnancy outcomes, osteoporosis, kidney disease, and most recently, the severity of COVID-19 [55][56][57][58][59][60][61][62][63][64].
It has also been found that periodontal disease and oral frailty in the elderly, and their interplay with the oral microbiota, have a role in the diagnosis of different neurodegenerative diseases including Alzheimer's disease [65][66][67]. While periodontal and endodontic disease differ in their pathogenicity, they are both chronic infections and share common pathogens, inflammatory mediators [18] and biological pathways, thus linking them with systemic health [68]. Along with gingivitis and periodontitis, root canal infections also pose an increased risk of bacteraemia [36][37][38][39]. The anatomic proximity of these infections to the bloodstream can result in bacteraemia during treatment [39]. Moreover, in contrast to periodontal infections, no epithelial barrier is found between the necrotic infected root canal and the highly vascular granulomatous tissue in periapical infections. In these lesions, areas of considerable bone resorption act as a "reservoir" of inflammatory biomarkers, including TNF-α, IL-6, IL-1β, PGE-2, and IL-8 [69,70]. Thus, endodontic disease is an enfolded primary infective focus for dissemination via the periapical vasculature into the systemic circulation of either microbes, which can invade endothelial cells and promote a vascular inflammatory state, or microbial by-products and localised inflammatory mediators that might trigger the immune response, affecting other tissues and organs [36][37][38][39][40][41][42][71]. Attempts have been made to evaluate the effect of apical periodontitis on the development of systemic conditions including cardiovascular diseases (CVDs) [33,72,73], diabetes [34,35] and adverse pregnancy outcomes [74].
Endodontic Bacteraemia
Earlier studies have shown bacteraemia after root canal treatment [75,76]. These studies concluded that the possibility of bacteraemia increases when root canal instrumentation is performed beyond the root apex compared to when it is confined within the root canal system. One of these studies showed that bacteraemia following root canal treatment is transient and lasts for up to 10 min after instrumentation, as the circulating microbes are cleared by the patient's immune system [75]. Baumgartner et al. (1976) also showed that bacteraemia did not occur if root canal instrumentation was confined within the root canal [77]. Using a culture-based approach, studies demonstrated bacteraemia in around 3-20% of cases after non-surgical root canal treatment [77,78]. However, most of the earlier studies had limitations with regard to the sensitivity of the blood culture techniques that they used. Debelian et al. (1992, 1995) published research highlighting that bacteraemia is associated not only with overinstrumentation of the root canal beyond the apex but also with instrumentation maintained within the root canal system [39,79]. Furthermore, Debelian et al. (1995), using biochemical tests and antibiograms, established that the microorganisms isolated from the blood had the root canal as their source [39]. In subsequent studies using electrophoresis, DNA hybridization, and phenotypic and genetic methods, they further confirmed the endodontic origin of bacteraemia microorganisms [37,80,81]. Due to the higher sensitivity of the identification techniques employed in these studies, far greater bacteraemia (31% to 54%) was detected after root canal treatment than reported in the past. Savarrio et al.
(2005) also confirmed these results and identified bacteraemia by a conventional culturing approach in 30% of cases [38]. They also showed, using pulsed-field gel electrophoresis, that microbes identified from blood and the root canal were genetically similar. Since more than half of oral bacteria are unculturable, the relatively low detection rate after root canal treatment in the earlier studies can be attributed to the use of a culture-based approach. Reis et al. (2016), using a molecular approach (qPCR), detected bacteraemia after non-surgical root canal therapy in all cases that had tested negative for bacteraemia with a culture approach [71] (Table 1). Therefore, the incidence of bacteraemia is much higher than reported in previous studies using a culture technique. The dissemination of microorganisms into the bloodstream is common and can occur less than 1 min after an oral procedure. Microorganisms from the infected site may reach the lungs, heart, and peripheral blood capillary system [42,82] and contribute to the development of CVDs. Another well-known life-threatening condition that can occur due to bacteraemia, especially in high-risk patients, is infective endocarditis. It is an infection of the heart lining, a heart valve or a blood vessel, affecting 3.6 in 1,000,000 individuals per year. The patient can suffer from fever, heart murmurs, myocardial abscess, valvular incompetence, or mycotic aneurysm, along with impacts on the central nervous system including stroke, transient ischemic attack, subarachnoid haemorrhage, brain abscess and toxic encephalopathy [83][84][85]. Therefore, bacteraemia associated with endodontic infections and treatment can have an adverse impact on general health. Interventional studies have shown significant differences in levels of inflammatory markers including CRP, C3 and ADMA between baseline and follow-up [90][101][102][103][104]. In a longitudinal interventional study, Bakhsh et al. (2022) found that the pre-operative serum levels of IL-1β, hs-CRP, FGF-23, and ADMA were significantly higher in patients with apical periodontitis than in healthy controls. This indicated the increased systemic burden associated with apical periodontitis. Furthermore, one year post treatment, the levels of these markers were generally reduced, indicating the positive effect of surgical and non-surgical root canal retreatment on the levels of these markers [105]. The reduction in these biomarkers after treatment seems to confirm the effectiveness of the available therapeutic approaches to endodontic treatment in suppressing systemic inflammation. This highlights the pathway of future research towards investigating the diagnostic potential of these biomarkers, which could be used along with the current objective criteria (clinical and radiological) to assess endodontic success, and also as prognostic markers of systemic response to endodontic treatment.
Apical Periodontitis and Cardiovascular Diseases
"Cardiovascular diseases" (CVDs) is an umbrella term for conditions affecting the heart and blood vessels, including coronary heart disease, cerebrovascular disease, stroke, hypertensive heart disease, cardiomyopathies and myocarditis, rheumatic heart disease, atrial fibrillation and flutter, congenital heart disease, valvular heart disease, peripheral artery disease, deep vein thrombosis, thromboembolic disease, and transient ischemic attack [106,107]. CVDs are a global health and economic burden, as they are the leading cause of death worldwide, responsible for about 30% of total global mortality [108]. Furthermore, it is expected that the incidence of CVDs will increase by approximately 10% over the next 20 years, resulting in a threefold increase in healthcare costs [109]. Studies have suggested a relationship between endodontic infection and coronary heart disease, the most common type of cardiovascular disease, accounting for 49% of the total CVD burden [22][23][24][106]. For example, in a hospital records-based study, An G.K. et al. (2016) [110] found that patients with apical periodontitis are 5.3-fold more likely to suffer from CVDs than those without apical periodontitis. The association was also evident in the study carried out by Virtanen et al. (2017) [111]. However, both studies included smokers, and smoking is itself a risk factor for CVDs. In the past decade, the association between apical periodontitis and CVDs has been widely investigated. Since elevated inflammatory biomarker levels can induce a systemic inflammatory response, they may increase the risk of cardiovascular events [112][113][114]. The inflammatory response may also be associated with endothelial dysfunction [100,115], endothelial cell activation, and atherosclerosis [116]. Furthermore, studies have also investigated the impact of CVD on endodontic treatment outcome. A systematic review and meta-analysis of longitudinal cohort studies reported that patients with CVD have a 67% higher risk of a negative endodontic outcome [117]. Since both CVD and endodontic disease can add to the inflammatory bioburden, disrupting homeostasis and causing further impairment of the immune response, they may negatively impact endodontic treatment outcome. Studies have shown a similarity between specific apical periodontitis inflammatory markers and those involved in atherosclerosis. In a systematic review and meta-analysis analysing the effect of apical periodontitis on levels of inflammatory mediators, Georgiou et al. (2019) found that apical periodontitis can increase CRP, IL-6, ADMA, and complement C3 levels; however, the authors suggested the need for further well-controlled longitudinal studies [78]. Several systematic reviews and meta-analyses have been carried out to demonstrate associations between elevated levels of biomarkers in patients with apical periodontitis and the development and progression of CVDs [27,29,31,72,118]. Recently, Jakovljevic et al. (2020) [73], in an umbrella review, revealed that, based on moderate to critically low-quality available evidence, the association between apical periodontitis and CVDs is weak, and the authors highlighted the need for future well-designed longitudinal clinical studies to strengthen the evidence and confirm a potential association.
Atherosclerosis
The major mechanism in the pathogenesis of coronary heart disease and cerebrovascular disease, which are the most frequent CVDs, is the development of atherosclerosis [119,120].
This is an inflammatory process that involves the formation of atherosclerotic plaque, affecting the tunica intima, tunica media, and tunica adventitia layers of large- and medium-calibre arteries, including the coronary arteries [121,122]. These plaques are accumulations of lipids and connective tissue along with inflammatory, endothelial, and smooth muscle cells [120]. Inflammation is regarded as the principal factor in the atherosclerotic plaque's initiation, progression, and rupture, leading to thrombosis and its systemic complications, including myocardial infarction and stroke [123,124]. Endothelial dysfunction is caused by low-grade chronic inflammation triggered by pathogenic factors such as microorganisms or CVD risk factors, including high levels of low-density lipoproteins (LDL), hypertension, hyperlipidaemia, smoking-induced toxins, free radicals, shear stress, and/or a combination of these factors [125]. Endothelial dysfunction results in increased endothelial permeability, which allows migration of cholesterol-filled LDL into the vessel wall. The LDL particles then become oxidised and stimulate the release of phospholipids. As a result, an inflammatory response is elicited, and monocytes are attracted to the lesion, where they become macrophages. These macrophages then engulf the oxidised LDL and transform into foam cells, which precipitate into the vessel wall, resulting in the formation of fatty streaks [124,126]. There is upregulation of the vascular soluble adhesion molecules, including ICAM-1, sVCAM-1 and E-selectin. These facilitate transmigration of monocytes and T-lymphocytes into the intima layer, resulting in the secretion of pro-inflammatory cytokines including IL-1β, IL-6 and TNF-α. IL-6 triggers the release of C-reactive protein (CRP) from hepatocytes. The release of these cytokines and growth factors eventually results in the formation of an atheroma, which is a necrotic core composed of macrophages, lipid-laden cells, mast cells, T-cells and degenerative material covered by a thin fibrous cap [121]. As the process persists, the fibrous capsule is thinned, leading to plaque destabilisation and thrombus formation, which can block coronary, cerebral or peripheral blood vessels, resulting in myocardial infarction, stroke or peripheral arterial disease [120,121] (Figure 3). There are several potential pathways by which chronic apical periodontitis could affect the development and progression of atherosclerosis. Firstly, endodontic microorganisms could directly seed in the arterial wall through bacteraemia, triggering a local inflammatory response including adaptive immune responses and inducing cellular alterations, which eventually results in the development of atherosclerotic plaques [127,128]. Secondly, dumping of endodontic bacterial by-products or local inflammatory mediators into the systemic circulation can lead to endothelial dysfunction and progression of the atherosclerotic inflammatory process [33]. Several studies have shown both bacteria and biomarkers of oral origin in atherothrombotic plaques or vascular biopsies [86,129,130]. Therefore, the presence of bacteraemia and the low-grade systemic inflammation associated with chronic apical periodontitis may contribute to the development of CVDs [87,131].
C-Reactive Protein (CRP)
CRP belongs to the pentraxin family [132]. Hepatocytes synthesise it in response to IL-6 [133]. CRP is considered a non-specific systemic inflammatory biomarker and is widely used to monitor infections and inflammatory conditions [134]. CRP, by enhancing inflammation, oxidative stress, and coagulation, is involved in various steps leading to vascular events [135]. CRP can activate complement C3, upregulate vascular adhesion molecules, trigger proinflammatory cytokines (IL-1 and TNF-α), recruit monocytes into the arterial wall, and cause elevation of superoxide, myeloperoxidase, and matrix metalloproteinases. It can damage endothelial vasoreactivity and facilitate low-density lipoprotein uptake by endothelial macrophages to form foam cells [124,135,136]. Several investigations have associated elevated levels of CRP with future cardiovascular events, including acute myocardial infarction (MI), stroke, and peripheral artery disease [137]. Indeed, hs-CRP has been suggested as a screening biomarker to evaluate coronary heart disease risk [138,139]. It is also strongly correlated with several cardiovascular risk factors, including diabetes, obesity, hypertension, and lipids [140,141]. However, CRP has its limitations as it is a non-specific marker, and its levels can increase dramatically in cases of infection and tissue damage [142]. In a recent study, it was found that poor oral health, periodontal disease and tooth loss were associated with higher levels of CRP, which may be an indicator of the contribution of periodontal disease to chronic systemic inflammation, and can also be a contributor towards the progression of atherosclerosis and thrombus formation [143].
A previous histological study reported increased IL-6 and CRP messenger RNA levels in the periodontal ligament tissue of teeth with apical periodontitis [144]. Enhanced CRP synthesis in response to IL-6 in apical periodontitis can act as a potential reservoir of IL-6 and CRP for sustaining a low-grade systemic inflammatory response [144][145][146], thus increasing the risk for atherosclerotic cardiovascular disease. Furthermore, Vidal et al. (2016) showed that apical periodontitis was associated with higher CRP levels in the plasma of hypertensive patients [147]. Garrido et al. (2019) also reported higher serum levels of hs-CRP in individuals with apical periodontitis when compared to healthy controls [96]. Sirin et al. (2019) also showed a positive correlation of increased serum hs-CRP levels with increasing severity of apical periodontitis [89]. On the other hand, the impact of root canal treatment on systemic levels of hs-CRP was tested in a study conducted by Poornima et al. (2020). The study results demonstrated that root canal treatment has a positive impact in reducing the levels of hs-CRP in systemically healthy patients with apical periodontitis [104]. However, when investigating a larger sample size, Bakhsh et al. found that surgical and non-surgical root canal retreatment initially increased the serum levels of hs-CRP within 3 to 6 months after treatment, but the levels declined at the one-year review [105].
Pentraxin-3 (PTX-3)
PTX-3 is a member of the long pentraxin family. It is expressed at sites of inflammation by several cells, including stromal cells (endothelial cells, fibroblasts), myeloid cells (monocytes/macrophages) and polymorphonuclear neutrophils, in response to primary proinflammatory stimuli (IL-1β, TNF-α), bacterial LPS, flagellin, outer membrane protein, and ischaemia [148,149]. Studies have shown that increased levels of PTX-3 increase the risk of cardiovascular diseases [149][150][151]. PTX-3 is also involved in atherosclerosis by interacting with many ligands, and it acts as a modulatory molecule of the complement system, inflammatory response, angiogenesis, and tissue remodelling [152]. It has been found that levels of PTX-3 indicate local inflammation at atherosclerotic lesions more accurately than CRP. This marker was investigated for the first time in apical periodontitis patients by Bakhsh et al. (2022). The study showed that serum levels of PTX-3 were significantly reduced at one year after surgical and non-surgical root canal retreatment [105]. This indicates that the systemic inflammatory burden of PTX-3 can be raised in patients with apical periodontitis, whereas endodontic treatment has a positive effect on PTX-3 serum levels.
Asymmetric Dimethylarginine (ADMA)
ADMA is an analogue of L-arginine that occurs naturally in plasma. It is an endogenous inhibitor of nitric oxide (NO) synthase, which catalyses the production of nitric oxide. NO modulates vascular tone and endothelial function and has a biological effect, especially in the cardiovascular system [153][154][155]. Therefore, increased ADMA levels, by inhibiting nitric oxide synthase and NO, would result in the endothelial dysfunction associated with atherosclerosis [156,157]. Inflammatory stimuli can result in increased ADMA levels, which subsequently increase the risk of coronary heart disease [158,159]. In a clinical study conducted to assess whether patients with apical periodontitis were at risk of developing an atherosclerotic lesion, Cotti et al.
(2011) found that patients with apical periodontitis had significantly higher levels of ADMA and a significant reduction in endothelial flow reserve when compared to controls [100]. Additionally, Georgiou et al. (2019) found in their systematic review that apical periodontitis increases the systemic levels of ADMA when compared to controls [87]. Bakhsh et al. (2022) also found that the pre-operative serum levels of ADMA were significantly higher in patients with apical periodontitis compared to controls. ADMA serum levels were reduced at one year post endodontic treatment; however, increased ADMA levels at baseline were associated with a significant reduction in the proportion of successful outcomes [105].
Fibroblast Growth Factor-23 (FGF-23)
FGF-23 is a hormone produced by osteocytes and osteoblasts that increases the activity of the kidneys to metabolise phosphate and vitamin D. Any inflammatory bone alteration indirectly impacts the production of FGF-23 [160,161]. Furthermore, several studies have found that FGF-23 is also regulated by LPS, IL-1β and TNF-α [162]. Higher levels of FGF-23 were found to have an impact on the kidney and the heart. In the kidney, high levels of FGF-23 cause an increase in sodium absorption and renin-angiotensin activation, which can subsequently lead to hypertension. Moreover, the heart and blood vessels are affected by high levels of FGF-23, which could lead to subclinical atherosclerosis, cardiovascular events, left ventricular hypertrophy, and death [163,164]. Bakhsh et al. (2022) investigated serum FGF-23 levels in apical periodontitis patients and found significantly higher levels at baseline compared to controls. FGF-23 levels at baseline were also positively correlated with the preoperative size of the periapical radiolucency. Interestingly, the levels of this marker were reduced at every subsequent review appointment, with a significant reduction at 1 year post surgical and non-surgical root canal retreatment [105]. This highlights the systemic inflammatory burden of FGF-23 caused by apical periodontitis and the positive effect of endodontic treatment on its reduction.
Matrix Metalloproteinases (MMPs)
Matrix metalloproteinases are enzymes involved in the physiological and pathophysiological processes of tissue repair and remodelling. They are stimulated by pro-inflammatory cytokines (IL-1β and TNF-α) and, when released, maintain a persistent inflammatory process in the periapical region. MMP-1, MMP-2, MMP-3, MMP-8, MMP-9, and MMP-13 have been shown to be present in human periapical lesions [165][166][167][168][169][170][171][172][173]. Furthermore, MMPs play a significant role in several pathological diseases, including atherosclerosis and the early development of hypertension [174,175]. Increased proteolytic activity of MMPs results in atherosclerotic plaque ruptures leading to cardiovascular events [176][177][178][179]. MMP-2 secreted by fibroblasts in primary endodontic infections contributes to periapical inflammation and tissue destruction [180]. MMP-8 is a neutrophil collagenase and, during inflammation, degrades collagen types I, II, and III. It is activated by autolytic cleavage, and its upregulation has been found in inflamed pulp and periradicular lesions. Pattamapun et al. (2017) found both MMP-2 and MMP-8 in root canal exudates, and their levels gradually decreased upon root canal treatment, suggesting that MMPs play a role in the healing of periapical lesions [181].
Human Complement C3
Complement C3 is a protein complex of the innate immune system.
Both intrinsic and extrinsic stimuli play a role in the activation of C3. This results in recruitment of phagocytes and target cell lysis [182]. The acute-phase reactant C3 fragment is linked with several systemic conditions, including metabolic syndrome, diabetes mellitus, smoking, and atherosclerotic CVD [182,183]. In addition, it has been observed that increased serum levels of C3 are associated with increased risk of CVDs [182,183]. Studies have demonstrated a reduction in the levels of C3 after endodontic treatment, thus confirming the effectiveness of endodontic treatment in suppressing systemic inflammation. Kettering and Torabinejad (1984), looking at the effect of dental abscesses on the levels of C3, found that serum levels of C3 were higher in patients with acute apical conditions when compared to controls. These levels were reduced following root canal treatment and extraction [101]. Furthermore, Márton et al. (1988) investigated C3 levels in patients with chronic periapical granuloma and found similar results following periapical surgery [102]. More recently, in a systematic review and meta-analysis, Georgiou et al. (2019) found that the presence of apical periodontitis contributed to elevated levels of C3. Root canal treatment resulted in reduced levels of C3, which could help reduce the risk of CVDs [87].
Statins and Apical Periodontitis
Elevated triglyceride and LDL levels and low high-density lipoprotein (HDL) levels are known risk factors for the development of atherosclerosis and CVDs. Several studies found a positive association between periodontitis and increased triglyceride levels [184,185]. Statins are a group of medicines that can help lower the levels of LDL cholesterol in the blood and are administered to patients with hypercholesterolaemia and the associated increased risk of atherosclerosis and heart diseases, including coronary heart disease and risk of cardiac infarction [186]. This medication has pleiotropic effects such as increased osteoblastic differentiation [187][188][189], promotion of viability and proliferation of osteoblasts [190,191] and improvement of mineralization [192][193][194][195]. Statins also inhibit osteoclastogenesis through their effect on the RANKL-induced nuclear factor kappa B (NF-κB) activation pathway [196]. In periodontitis patients taking statins, a conjoint benefit was revealed with scaling and root planing [197]. The effect of statins on apical periodontitis healing has also been investigated. In an animal study, Lin et al. (2009) found that the introduction of simvastatin before the induction of a periapical lesion significantly reduced bone resorption when compared to the control group [198]. This is due to the anti-inflammatory and immunomodulatory effect of statins in decreasing CD-68-positive macrophages and protecting osteoblasts [198]. In another animal study, Pereira et al. (2016) also showed that the use of simvastatin decreased the progression of periapical ligament space widening in apical periodontitis-induced rats [199]. Alghofaily et al. (2018) tested the effect of long-term statin intake on the healing of apical periodontitis and found a significant association between long-term statin intake and healing of apical periodontitis after non-surgical root canal treatment [196]. Although these studies provide some evidence of the positive effect of statins on the healing of apical periodontitis, further investigations are required to establish this effect.
Apical Periodontitis and Diabetes Mellitus
Diabetes mellitus (DM) is a complex multisystem metabolic syndrome characterised by abnormalities in carbohydrate, protein and lipid metabolism due to either a profound or absolute insulin deficiency caused by pancreatic β-cell dysfunction (type 1) and/or insulin resistance in liver and muscle (type 2) [200]. DM can affect the immune system of the individual through upregulation of pro-inflammatory cytokines from monocytes and polymorphonuclear neutrophils, along with downregulation of growth factors from macrophages, which predisposes to chronic inflammation, progressive degradation of tissues and diminished tissue repair capacity (Figure 4) [201]. Diabetes can eventually lead to dysfunction of several organs such as the kidneys, nerves, eyes, blood vessels and the heart. It has been reported that diabetes is associated with increased morbidity and mortality [202,203]. DM is a global health burden; in 2019, DM was affecting around 463 million adults, and it is expected that this figure could reach around 700 million by the year 2045 [204]. Chronic systemic inflammation in DM causes an alteration and elevation in the serum levels of the proinflammatory markers TNF-α, IL-1α, IL-1β, CRP and IL-6 [205,206], which can have a negative impact on periapical healing [207]. Systemically, DM inhibits collagen formation and alters the degradation of matrix proteins and tissue remodelling, which leads to poor wound healing [208]. Garber et al. (2009) showed poor wound healing with direct pulp capping using mineral trioxide aggregate (MTA) in diabetic rats. The results also showed lower dentin bridge formation and elevated pulpal inflammation [209]. There is strong evidence from the literature that patients with DM have a higher prevalence of apical periodontitis, greater periapical lesion size and greater incidence of periapical infections as compared with patients who do not have diabetes [210][211][212][213][214]. In a retrospective study, Segura-Egea et al. (2005) showed a higher prevalence of untreated periapical lesions and unsuccessful endodontic treatment in patients with DM [212].
There was a trend toward increased symptomatic periradicular disease in patients with diabetes who received insulin, as well as flare-ups in all patients with diabetes [210][211][212][213]. On the other hand, the results of some studies suggest that chronic periapical disease correlates with higher HbA1c levels and contributes to diabetic metabolic dyscontrol [26,215]. The inflammatory periapical response is enhanced in diabetics, leading to a rise in blood glucose with intensification of diabetes, requiring an increase in insulin dosage or therapeutic adjustment [216]. Yip et al. (2021) provided evidence linking DM and the level of glycaemia to the increased prevalence of apical periodontitis. The study also implied that statin and metformin use may be protective in this relationship, as they were associated with a lower prevalence of apical periodontitis [214]. Furthermore, the available scientific evidence strongly suggests that DM has a negative impact on the outcome of endodontic treatment in terms of periapical healing, due to delay or arrest of periapical repair. Ng et al. (2011) found that DM is one of the prognostic factors for the survival of root-filled teeth [217]. There is a decrease in the success of endodontic treatment in cases with preoperative periradicular lesions in patients with DM. The prognosis for root-filled teeth is worse in diabetics, showing a higher rate of root canal treatment failure with increased prevalence of persistent chronic apical periodontitis [210][211][212][213]. Therefore, diabetes contributes to decreased retention of root-filled teeth and is a significant risk factor for tooth extraction after non-surgical root canal treatment [20,218]. Since diabetes is the third most prevalent chronic medical condition in patients seeking dental treatment [216], dentists should be aware of the possible relationship between diabetes and endodontic infections. Diabetic patients, especially those with poor glycaemic control, should be informed about the evidence of poorer outcomes of endodontic treatment and the increased risk of failure associated with diabetes. This should be part of informed consent, and care planning should include liaising with the patient's physician.
Apical Periodontitis and Pregnancy
Periodontal diseases have been shown to burden pregnant patients due to systemic inflammatory stress [219]. Studies indicated that prostaglandin E2 (PGE-2) and TNF-α from inflamed periodontal tissues in pregnant women can reach the placenta and amniotic fluid, contributing to preterm birth [219][220][221][222]. Recently, the association between apical periodontitis and adverse pregnancy outcomes has also been investigated.
Studies showed that the presence of a periapical lesion in postpartum women was a risk factor for shorter pregnancy duration, intrauterine growth restriction and preterm birth [223,224]. Khalighinejad et al. (2017) found that maternal apical periodontitis may be a strong independent predictor of preeclampsia [225], the most common adverse pregnancy outcome, characterized by hypertension and proteinuria after the 20th week of gestation [226], and among the leading causes of maternal mortality [227]. In a recent systematic review, Jakovljevic et al. (2021) critically evaluated the available evidence on the association of maternal apical periodontitis with adverse pregnancy outcomes. The authors concluded that, based on 'Fair' and 'Good' quality available evidence, a positive association was observed between maternal apical periodontitis and adverse pregnancy outcomes [74]. Therefore, it could be suggested that the risk of preeclampsia and low birth-weight preterm birth may be reduced through timely diagnosis and treatment of any source of inflammation, including apical periodontitis, before pregnancy.
Apical Periodontitis and Autoimmune Disorders
Autoimmune disorders are a group of conditions that share a self-reactive immune response involving different inflammatory mediators [228]. Inflammatory bowel diseases (IBD), including ulcerative colitis and Crohn's disease [229], along with rheumatoid arthritis (RA) and psoriasis (Ps), are examples of autoimmune disorders. Studies have shown a higher prevalence of apical periodontitis in some autoimmune disorders such as IBD and RA [230][231][232]. Recently, Ideo et al. (2022) showed similar findings, where patients affected by autoimmune diseases (RA, Ps and IBD) had a higher prevalence of apical periodontitis compared to controls [233]. This may be attributed to the role of excessive production of common inflammatory cytokines such as TNF-α, IL-1, IL-6, IL-23 and IL-17 in the development, progression and persistence of both conditions [2][234][235][236]. Furthermore, the RANKL-osteoprotegerin (OPG) pathway is involved in the progression of RA as well as apical periodontitis [235]. Immune system status plays an essential role in the development and progression of apical periodontitis. The medications used for the treatment of these autoimmune disorders modify the immune response and include conventional Disease-Modifying Anti-Rheumatic Drugs (cDMARDs) [237][238][239] and biologic Disease-Modifying Anti-Rheumatic Drugs (bDMARDs) [240][241][242]. bDMARDs block the activity of targets in the inflammatory process, including cytokines (TNF-α, IL-6, and IL-1); the RANKL-induced nuclear factor kappa B activation pathway; and T or B cell receptors [243,244]. Piras et al. (2017) found that the frequency of teeth with apical periodontitis was significantly higher in patients with autoimmune disorders receiving bDMARDs [230]. In a recent study, Ideo et al. (2022) showed similar results, where patients with autoimmune diseases taking biologic medications had a higher prevalence of apical periodontitis [233]. In two studies, Cotti et al. showed that endodontic treatment of teeth with apical periodontitis in patients taking biologic medications resulted in faster healing than among controls, thereby suggesting that immune-modifying treatment may influence the healing of apical periodontitis after treatment [245,246].
Therefore, in patients with autoimmune disorders, the altered immune response and the influence of immunomodulatory therapy can have an impact on the prevalence of apical periodontitis and on the prognosis after endodontic treatment.
Apical Periodontitis and Other Systemic Conditions
Although not yet confirmed, some researchers have tried to correlate the presence of apical periodontitis with different systemic conditions, including liver diseases and haemophilia [247,248]. In a cross-sectional study, Castellanos-Cosano et al. investigated the frequency of apical periodontitis among patients undergoing liver transplant assessment and found that 79% of the study participants had one or more apical periodontitis lesions, compared to healthy controls [247]. Furthermore, the same group of authors investigated the prevalence of apical periodontitis in patients with inherited haemophilia and found that an apical radiolucency was present in almost 68% of patients with haemophilia [248]. The findings of these investigations strongly suggest that apical periodontitis is found in several systemic diseases, which mandates frequent dental follow-up and reinforcement of the oral hygiene regime in medically compromised patients, not only to improve and maintain their oral health but also to decrease the systemic burden of oral disease in these patients.
Conclusions
There is emerging evidence that bacteraemia and low-grade systemic inflammation associated with chronic apical periodontitis may contribute negatively to systemic health, for example through the development of CVDs, adverse pregnancy outcomes, and diabetic metabolic dyscontrol. Although the evidence is limited, it suggests that conditions such as DM or autoimmune disorders have an impact not only on the prevalence of apical periodontitis but also on the prognosis after endodontic treatment. Statin use may be protective in this relationship by having a positive effect on apical periodontitis healing. Furthermore, convincing evidence supports that successful root canal treatment has a beneficial impact on systemic health by reducing the inflammatory burden, thereby dismissing the misconceptions rooted in research performed 70-80 years ago about the relationship between endodontic treatment and focal infection, which resulted in arguments in favour of tooth extraction. Further high-quality research is required to strengthen the available evidence showing the benefits of endodontic treatment on systemic health.
Conflicts of Interest: The authors declare no conflict of interest.
2022-07-17T15:11:20.033Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "fbbb14151339a177729546df365a0a3046518d75", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1648-9144/58/7/931/pdf?version=1657784922", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ae2d64cc1489d143406f227581a9ddddff9db99a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
210652361
pes2o/s2orc
v3-fos-license
THE DESCRIPTION AND THE IMPACT OF PLAYING ONLINE GAMES ON PHARMACY STUDENTS IN SAMARINDA
Online games are games that can be accessed by the community using internet access. Online games are known to have fun and addictive effects on their users. Indonesia has the 16th largest number of online game users in the world. This study aims to provide an overview of the habit of playing online games, and its impact, among pharmacy students in Samarinda. The method used in this study was a questionnaire distributed to at least 57 pharmacy student respondents in Samarinda. The results show that at least 80% of students have played online games, with an average playing time of 1-2 hours per day. The results of the questionnaire also showed an impact in the form of decreased sensitivity of students to the surrounding environment and a tendency to speak harshly as a result of playing online games. Based on the results, it can be concluded that the habit of playing online games can reduce students' sensitivity to their environment and can encourage negative behavior, especially verbal behavior. This research can be used as a preliminary description of students' negative behavior changes due to the habit of playing online games.
INTRODUCTION
In this millennial era, information technology is advancing rapidly. One example of technological progress in this era is the internet. Various information can be accessed freely through the internet. Beyond information, the internet also provides entertainment, for example online games. The first online games to appear were war or flight simulation games used for military purposes, which were eventually released and then commercialized; these games then inspired other games to emerge and develop. Aside from being a means of entertainment, online games teach something new because, with frequent play, players will imitate scenes in the online game. Playing online games can have positive impacts, among others improving the motor system; for example, teenagers who play online games can improve their play strategy and language skills 1. In addition to providing positive impacts, online games also have negative impacts, especially for gamers, who can easily forget the priority scale of their daily activities, leading, for example, to laziness and addiction. Addiction to online games can trigger destructive actions that harm other people, such as committing theft when money runs out to rent a computer or buy a data quota 1. Addiction is a form of behavior driven by a high sense of dependence on things one likes, such that someone can be said to be addicted if they repeat the same activity more than five times 2. Individuals who are addicted to online games can spend as much as 30 hours a week playing, or on average ± 20-25 hours a week, so that in a day they can play for more than 5 hours 3. An example of the phenomenon of online game addiction in Indonesia occurred in Banyumas: throughout 2018, ten children in Banyumas were diagnosed with mental disorders due to addiction to playing online games. Hilma Paramita, a psychiatrist at Banyumas Hospital, said that on average these patients could not control themselves when playing online games. As a result, they were no longer able to function normally 4.
Based on the explanation above, this research is intended to describe the habit of playing online games among Pharmacy students in Samarinda and its impact. Pharmacy students were chosen on the grounds that the pharmacy major carries a fairly high course load, with more lecture time, practicums, and assignments than other majors. This high load indicates whether online games are also played by students with little free time. This research is expected to provide initial information for parents and teaching staff in finding solutions to the students' habit of playing online games.

MATERIAL AND METHOD
General Procedure
This research is a quantitative descriptive study whose main objective is to explain existing phenomena using numbers to characterize individuals or groups. Such a study assesses conditions as they appear; its purpose is limited to describing the characteristics of something as it is 5. The research was conducted by distributing questionnaires via Google Forms to students of the S1 Pharmacy Study Program at the University of Muhammadiyah East Kalimantan (UMKT). Sampling was done by random sampling of all UMKT pharmacy students; 57 respondents filled out the form, which contained several questions related to the habit of playing online games. The results were then grouped, tabulated, and analyzed. The questionnaire consisted of closed items with two to four answer options, with the questions and the alternative answers set by the researcher according to the responses expected from the students. A closed questionnaire is efficient from the researcher's perspective because the respondents' answers can be matched to the study's needs.

Overview of research respondents
This research was conducted on second-semester students of the UMKT Bachelor of Pharmacy study program using Google Forms; 57 respondents filled out the questionnaires. Based on the results, the distribution of respondents by sex was 42% male and 58% female. About 98% of respondents knew about online games; only 2% did not, on the grounds that they had never played them. The data indicate that knowledge about online games is not influenced by sex, consistent with Indonesian gamer statistics showing that about 56% of online game users are male and 44% are female 6. These results illustrate that online games are enjoyed by both sexes, without one gender dominating.

Figure 1. Description of research respondents among UMKT pharmacy students based on sex (A) and knowledge about online games (B).

The questionnaire results also showed that only 2% of respondents did not know about online games, while the other 98% did. This level of knowledge is in line with the rapid adoption of the internet by young people: the ease of access and the speed of finding information through the internet make students' knowledge about online games high.
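As a rough illustration (not the authors' actual analysis), the percentage breakdowns reported above can be tallied from closed-questionnaire records with a few lines of Python; the field names and example records here are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical respondent records; the real data came from a Google Form
# filled out by 57 UMKT pharmacy students (field names are illustrative).
responses = [
    {"sex": "male", "knows_online_games": "yes"},
    {"sex": "female", "knows_online_games": "yes"},
    {"sex": "female", "knows_online_games": "no"},
    # ... one record per respondent, 57 in total
]

def percentage_breakdown(records, field):
    """Share of each answer as a percentage of all respondents."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {answer: round(100.0 * n / total, 1) for answer, n in counts.items()}

print(percentage_breakdown(responses, "sex"))
print(percentage_breakdown(responses, "knows_online_games"))
```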
Internet users in Indonesia were estimated at about 143.7 million people, or 54% of the population, in the 2018 survey, growing from the previous survey in 2017 7.

Overview of the reasons for playing online games
To describe what drives UMKT pharmacy students to play online games, the questionnaire asked about the reasons thought to trigger playing. The reasons given were quite varied, with similar percentages: boredom, hobby, entertainment, and stress, at 28%, 21%, 19%, and 14%, respectively. A previous study investigating why students play online games found that the main reason was to relieve boredom from the learning process at school 7, but a high frequency of play leads to addiction. The present results likewise show that the tendency to play online games is driven by boredom, entertainment, and stress due to the heavy lecture load 8.

Figure 2. Percentage of motivating reasons for playing online games in pharmacy students at Universitas Muhammadiyah Kalimantan Timur.

Overview of the impact and attitude when playing online games
Basically, both conventional and online games can have a positive or a negative impact, depending on the frequency and duration of play. Playing online games can have positive impacts, among others improving the motor system; for example, teenagers who play online games can improve their play strategy and language skills 1. One review summarizes at least four domains that benefit from playing online games: cognitive (e.g., attention), motivational (e.g., resilience in the face of failure), emotional (e.g., mood management), and social (e.g., prosocial behavior) 8. Besides the positive impacts, online games also have negative impacts, especially for gamers, who easily lose track of the priority scale of their daily activities, leading, for example, to laziness and addiction. At least 97% of children in the United States spend at least 1-2 hours a day playing online games 8. The results of this study show something similar: at least 65% of UMKT pharmacy students spend 1-2 hours a day playing online games, and 11% spend 3-8 hours a day (Figure 3). Addiction is a form of behavior driven by a high sense of dependence on what one likes; a person can be said to be addicted if they repeat the same activity more than five times 2. Individuals who are addicted to online games can spend as much as 30 hours a week playing, or on average ±20-25 hours a week, i.e., more than 5 hours a day 1. Addiction to online games can lead to destructive actions that harm other people, such as committing theft when money runs out to rent a computer or buy a data quota 1. Although 65% of respondents spent only 1-2 hours a day playing online games, 11% spent 3-8 hours a day. Such high playing times can still produce negative effects, including violence, addiction, and even depression 9,10,11. The tendency to play online games can also influence the attitudes of children in their daily lives.
These attitudes take the form of responses to the environment and of behavior while playing online games. This study therefore surveyed the attitudes of UMKT pharmacy students while playing online games. Regarding students' responses to their environment while playing, 54% of respondents still responded quickly when asked for help, 23% delayed, 11% refused, and 12% even felt disturbed. As for behavior while playing, 39% of respondents did not speak harshly, 35% sometimes spoke harshly, and 26% habitually spoke harshly (Table 1).

Table 1. An overview of the attitudes of pharmacy students while playing online games: percentage of types of responses given when asked for assistance, and of harsh speech, while playing online games.

The respondents' responses while playing online games were then scored to see whether the trend was negative or positive, making the impact of playing online games more visible. The assessment found that 54% of respondents gave positive responses and 46% gave negative responses when asked for help while playing online games, while for behavior during play, 39% of respondents did not speak harshly and were scored positive, and 61% were scored negative. The negative score comprises the answers "sometimes speak harshly" and "speak harshly", both of which reflect a negative attitude of the respondent.

CONCLUSION
The results show a high percentage of respondents, more than 50%, tending toward negative behavior when playing online games. Based on these results, it can be concluded that playing online games can encourage someone to behave negatively, both toward the surrounding environment and while playing the game.
Emergence of bound states in ballistic magnetotransport of graphene antidots

An experimental method is proposed for detecting bound states around an antidot formed from a hole in a graphene sheet by measuring the ballistic two-terminal conductance. In particular, we consider the effect of bound states formed by the magnetic field on the two-terminal conductance and show that one can observe Breit-Wigner-like resonances in the conductance as a function of the Fermi level close to the energies of the bound states. In addition, we develop a new numerical method in which the computational effort is proportional to the linear dimensions of the scattering region, instead of to its area, as is typical for the existing recursive Green's function methods.

I. INTRODUCTION
After the isolation of single-layer graphene flakes for the first time 1, fabrication methods of graphene devices have undergone great development. It turned out that the intrinsic charge traps in standard thermally grown SiO2 substrates, along with the residuals from the device preparation processes, place a major limitation on the mean free path. Novel fabrication procedures, like suspending the graphene region 2,3 or using BN substrates 4, enabled the realization of high-quality ballistic graphene devices above the micrometer scale. 5,6 Using basic device geometries, several ballistic phenomena were recently demonstrated in which the two-dimensionality and the Dirac spectrum of graphene play important roles: due to Klein tunneling, sharp Fabry-Pérot interference was observed in p-n junctions, 7,8 conductance quantization was seen in narrow suspended graphene constrictions, 9 and evidence of ballistic trajectories was observed in BN-based 10,11 and suspended devices. 12 The high quality of these novel devices is well demonstrated by the observation of the quantum Hall effect (QHE) even at magnetic fields below 100 mT, owing to the weak impurity potential. 9,13 These fabrication techniques open new routes to experimental studies of various device geometries in the ballistic regime, thus making theoretical investigations of such systems essential. Recent theoretical studies of periodic graphene antidot lattices (GALs) in the ballistic regime have predicted several possible experimental applications. 14-17 GALs are expected to be suitable for preparing resonant tunneling diodes 14 or graphene waveguides formed by a regular graphene strip surrounded by a GAL. 15

In this paper we argue that coherent ballistic transport can be used to study the properties of the bound states (BSs) in graphene nanostructures. To this end, we studied the conductance of a graphene antidot attached to two metallic contacts as shown in Fig. 1. Between the contacts, propagating edge states ξ_i (i = 1, 2) are formed by applying a homogeneous magnetic field perpendicular to the plane of the graphene sheet. Moreover, due to the magnetic field the system also exhibits BSs (labeled by H in Fig. 1) localized at the hole. We argue that resonant peaks can be observed in the conductance at energies close to the BSs even for small couplings between the edge states and the BSs. Moreover, such resonant peaks can be described by the well-known Breit-Wigner formula. 18 Similar antiresonance shapes were obtained in the conductance curves of graphene nanoribbons in the presence of vacancies, which can be considered a special case of antidots. 19
Later on we estimate the coupling terms Γ_i (i = 1, 2), which depend on the strength of the scattering impurities and on the decay length (DL) of the edge states. The edge states are localized at the edges and vanish exponentially with the distance measured from the edge on a length scale given by the DL. In general the DL is governed by the magnetic length l_B = √(ℏ/|eB|). In the case of zigzag edges and an energy range close to the Dirac point, however, the edge states penetrate into the bulk on a length scale larger than l_B due to the charge accumulation over the edge. 20

The rest of the paper is organized as follows. In Sec. II we present an effective model predicting resonant peaks in the conductance as a function of the Fermi level. In Sec. III we present our numerical results obtained by tight-binding (TB) calculations and compare them to the predictions of the effective model introduced in Sec. II. The details of our numerical approach are presented in Sec. IV. Finally we summarize our work in Sec. V.

II. BREIT-WIGNER-LIKE RESONANCES IN THE CONDUCTANCE
In order to understand the behavior of the conductance of a graphene nanostructure containing an antidot as shown in Fig. 1, we develop an effective model and show that the conductance as a function of the Fermi level exhibits Breit-Wigner-like resonances 18 close to the BSs. The effective Hamiltonian of the antidot in a narrow energy window around a given BS of energy E_BS in the basis (ξ_1, BS, ξ_2)^T can be written as

H_eff = | g_1^(-1)   Γ_1     0        |
        | Γ_1*       E_BS    Γ_2*     |
        | 0          Γ_2     g_2^(-1) |,   (1)

where g_i is the Green's function of the edge state ξ_i. For simplicity, we assume that the g_i's are scalars depending weakly on the energy and are identical for the two edge states: g_1 = g_2 = g. To calculate the transmission probability between the edge states ξ_1 and ξ_2, we first eliminate the degree of freedom related to the BS using the decimation method. 21 Then the reduced effective Hamiltonian becomes

H̃_eff = | g^(-1) + |Γ_1|²/(E − E_BS)    Γ_1*Γ_2/(E − E_BS)          |
         | Γ_2*Γ_1/(E − E_BS)            g^(-1) + |Γ_2|²/(E − E_BS)  |.   (2)

The transmission probability T_2→1 between the edge states is determined by the off-diagonal matrix element of the corresponding Green's function, and close to E_BS it takes the Breit-Wigner form

T_2→1(E) ∝ Γ_1 Γ_2 / [ (E − E_res)² + (Δ_0/2)² ],   (3a)

where

E_res = E_BS + Σ(E_BS)   (3b)

is the resonant energy, with Σ the self-energy generated by the coupling to the edge states, and Δ_0 stands for the width of the resonance. As we can see from Eq. (3a), the transmission probability T_2→1(E) exhibits a Breit-Wigner resonance close to the energy E_BS of the BS. Consequently, the conductance C(E) ∼ [1 − T_2→1(E)] between the metallic contacts also manifests resonances at the same energies as the transmission probability T_2→1(E). The resonances, however, are shifted from E_BS by the self-energy given in Eq. (3b).

In the low-density limit the Γ_i coupling strengths can be approximated by first-order perturbation theory as Γ_i ≈ ⟨ξ_i|V|BS⟩, where the scattering potential of the impurities is V(r) = ν(r)ρ(r) [with the density ρ(r) and the strength ν(r) of the scattering impurities]. Edge states localized due to the magnetic field decay as ξ_i(x) ∝ exp(−x²/2l_B²), where x is the distance measured from the edge. Here the decay factor can be extracted from the asymptotic expansion of the wave function describing electronic states in a magnetic field as given in Ref. 22. In the case of a zigzag edge, used in our numerical calculations, the charge accumulation over the edge 20 increases the DL of the edge states for energies close to the charge neutrality point. If the distribution of the scattering impurities is homogeneous [ρ(r) ≡ ρ, ν(r) ≡ ν], the coupling strength becomes proportional to

Δ_0 ∝ Γ_i ∝ ρν exp(−D²/4l_B²),   (4)

where D labels the hole-edge distance. Note that the leading contribution to the Γ_i's comes from impurities located at x ∼ D/2.
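To make the effective model concrete, the following minimal sketch (our illustration, not code from the original work) evaluates a Breit-Wigner lineshape of the type in Eq. (3a) with couplings that decay as exp(−D²/4l_B²); the scale gamma0 and the numerical values of B and D are arbitrary illustrative choices:

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C

def magnetic_length(B):
    """Magnetic length l_B = sqrt(hbar / |e B|), in meters."""
    return np.sqrt(HBAR / (E_CHARGE * abs(B)))

def coupling(D, B, gamma0=1e-3):
    """Edge-state/bound-state coupling Gamma ~ exp(-D^2 / (4 l_B^2)).

    gamma0 sets the overall scale (impurity strength and density); it is an
    arbitrary illustrative value here, in units of hbar*omega_c."""
    lB = magnetic_length(B)
    return gamma0 * np.exp(-D**2 / (4.0 * lB**2))

def transmission(E, E_res, gamma1, gamma2):
    """Breit-Wigner resonance: a Lorentzian whose width is taken as
    gamma1 + gamma2 for illustration; equals 1 at resonance when the
    couplings are symmetric (gamma1 = gamma2)."""
    return gamma1 * gamma2 / ((E - E_res)**2 + ((gamma1 + gamma2) / 2.0)**2)

# Example: a single resonance for B = 100 mT and hole-edge distance D = 80 nm
B, D = 0.1, 80e-9
g = coupling(D, B)
E = np.linspace(0.4, 0.6, 2001)   # energies in units of hbar*omega_c
T21 = transmission(E, E_res=0.5, gamma1=g, gamma2=g)
# The conductance between the contacts then dips as C(E) ~ sigma_0 * (1 - T21).
```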
As one can see, the coupling strengths Γ_i are sensitive to the strength of the magnetic field and to the hole-edge distance. At finite temperature the width of the resonance also depends on the temperature broadening k_B T. Consequently, since Δ_0 vanishes exponentially for D/l_B ≳ O(1), the width of the resonance will be governed by the temperature broadening rather than by Δ_0 calculated in the zero-temperature limit.

III. NUMERICAL RESULTS
To explore the behavior of the conductance of our graphene antidot nanostructure, we calculated the conductance between two electrodes at zero temperature as a function of the Fermi level using the tight-binding method 23 and interpreted the results with the effective model introduced in the previous section. The calculations were performed on graphene ribbons with zigzag edges, and only the nearest-neighbor hopping γ was taken into account. The details of our numerical approach, including the improvement of the usual recursive Green's function method, are discussed in Sec. IV.

Our main results are summarized in Fig. 2. The calculated conductance for a geometry where the radius of the hole is small compared to the width of the ribbon is shown in Fig. 2(a), while the opposite limit is shown in Fig. 2(b). According to Eq. (3a), the expected resonances are most pronounced if Γ_1 ≈ Γ_2. To this end our calculations were performed on systems exhibiting an approximately symmetrical arrangement of the scattering centers around the hole, which was located at the center of the strip. The hole-edge distance and the magnetic field were chosen to meet the condition D/l_B ≳ O(1) in order to obtain well-separated resonances. As Fig. 2 shows, one can indeed observe resonant peaks in the conductance at energies close to the BSs of the hole, denoted by red vertical lines in the subplots of Fig. 2. The energy eigenvalues of the BSs are calculated within a TB framework discussed later in this section. Our numerical results indicate that the qualitative features of the conductance are not sensitive to the ratio R/W (where R is the radius of the hole and W is the width of the strip). In the calculations we consider only energies below the first Landau level ℏω_c = √(2ℏ|eB|v_F²) (with the Fermi velocity v_F), where there is only one propagating channel per edge. The conductance calculated at higher energies (not shown in the figures) manifests a complex interference pattern, and hence we restrict our attention to the energy range 0 < E < ℏω_c.

We study the energy dependence of the conductance in two ranges. First we examine the energy range where the DL of the edge states is smaller than the hole-edge distance D, while in the second energy range the DL becomes the larger length scale. (i) For energies where the DL of the edge states is smaller than D, the coupling between the BSs and the edge states can be increased by the presence of scattering impurities. According to our numerical results, atomic vacancies are the most efficient scattering centers (see the blue circles in the left panels of Fig. 2). In the discussed energy range the conductance equals one conductance unit σ_0 = 2e²/h (with the factor of 2 standing for the spin degeneracy), corresponding to one open channel in the ribbon [see E ≳ 0.2 ℏω_c in Fig. 2(a) and E ≳ 0.35 ℏω_c in Fig. 2(b)]. The scattering centers affect the conductance only close to the energies corresponding to the BSs of the hole; otherwise the edge states do not backscatter into each other. The model described in Sec. II explains the shift between the resonances and the BSs.
However, at energies ∼ (0.5, 0.6, 0.85) ℏω_c in Fig. 2(a) and ∼ (0.5, 0.7, 0.75, 0.85, 0.9) ℏω_c in Fig. 2(b), one can see more than one resonance around each of the BSs, which cannot be explained by this simple model. In numerically exact calculations the couplings Γ_i as well as the Green's functions g_i are not scalars and may have a stronger energy dependence; hence the structure of the resonances becomes more complex as well. (ii) At energies where the DL of the ξ_i edge states is larger than D (due to charge accumulation at the edge 20), backscattering of the edge states into each other is possible even without hitting a resonant energy state of the antidot. Therefore the conductance over this energy range is smaller than σ_0 [see E ≲ 0.2 ℏω_c in Fig. 2(a) and E ≲ 0.35 ℏω_c in Fig. 2(b)].

Now it is clear that the observed resonances in the conductance are related to the BSs of the hole. Therefore, we now describe the method used to obtain the BSs localized at the hole and investigate the magnetic field dependence of these resonances. In order to obtain the energy eigenvalues of the BSs we constructed the TB Hamiltonian of a strip including a hole of a given radius. The Hamiltonian was constructed in the framework of the TB model described in Sec. IV [see Eqs. (5)-(9)], excluding sites located inside the hole. Thus the hole inside the graphene strip was terminated by an atomically sharp edge, where we did not assume any surface reconstruction of the lattice, nor were the dangling bonds of the sites compensated. We numerically checked that the energy eigenvalues of the BSs are sensitive to the lattice termination over the perimeter of the hole, so one would find unique results for each antidot. The calculated wave functions of the BSs are also highly anisotropic due to the roughness of the lattice termination over the perimeter, as shown in the left panels of Fig. 2. However, the qualitative properties of the calculated energy eigenvalues as a function of the magnetic field are robust. Since our goal here is to reveal physical properties of individual graphene antidots, we do not study the statistical properties of the BSs. If the dimensions of the strip are large compared to the DL of the edge states ξ_i, the BSs become separated from the edge states, since the BS wave functions vanish exponentially close to the edges. Thus, the energy eigenvalues of the BSs are robust to small variations in the dimensions of the strip, which provides an efficient way to numerically separate BSs from edge states.

Figure 3 shows the energy eigenvalues of the BSs calculated for the hole corresponding to Fig. 2(b) in the energy range 0 < E_n < ℏω_c, as a function of the magnetic flux φ inside the hole. As one can see, the magnetic field dependence of the energy eigenvalues is nonmonotonic; however, at higher magnetic fields (φ ≳ 3φ_0) the E_n/ℏω_c energy eigenvalues are expected to vary linearly with φ. In Fig. 3 one can also observe anticrossing points of the energy lines. The anticrossings, even if they have no direct effect on the conductance, reveal interesting physical properties of the antidot. Indeed, according to the calculations of Ref. 24 based on the Dirac Hamiltonian, the valley degeneracy of the energy eigenvalues is lifted due to the boundary condition at the edge of the antidot.
Near atomically sharp scattering centers, such as edge terminations, the valley mixing in the electron states is enhanced due to scattering events with large momentum difference, and this results in the anticrossings of the energy lines. For small magnetic fluxes the energy eigenvalues approach the closest Landau level. However, at low magnetic field it becomes difficult to calculate the energy eigenvalues of the BSs in the TB framework. Since the DL of the wave functions becomes large for small magnetic fields, one needs to consider exceptionally large strips in order to separate the BSs from the edge states. Hence in Fig. 3 we calculated the energy eigenvalues of the BSs only for magnetic fluxes larger than ∼ 0.5φ_0. Since the TB model discussed above exhibits electron-hole symmetry, one recovers identical results in the energy range −ℏω_c < E_n < 0.

Although in experiments the size of the samples is an order of magnitude larger than what we used in our numerical calculations, the obtained results are still relevant, since we expect analogous physical properties for systems obeying the scaling law (W, L, R) → N × (W, L, R) for the dimensions, B → B/N² (l_B → N l_B) for the magnetic field, and E_F → E_F/N for the Fermi energy. Thus, considering samples with a realistic size of W ∼ (0.6-1.0) µm and preserving the aspect ratio of Figs. 2(a) and 2(b), we expect results similar to our calculations at magnetic fields B = (60-240) mT. According to the calculated energy eigenvalues of the BSs shown in Fig. 3, the average level spacing of the BSs at higher magnetic fields is about ΔE_BS ≈ 0.08 ℏω_c ∝ √B. For a magnetic field in the range of B = (60-240) mT the level spacing reads ΔE_BS ≈ (0.7-1.4) meV. In experiments one can reach the quantum Hall regime for suspended graphene samples at a temperature of T ∼ 4 K and a magnetic field of B ∼ 100 mT. 9,13 Since the temperature broadening k_B T ∼ 0.3 meV is smaller than ΔE_BS [and Δ_0 in Eq. (4) vanishes exponentially with the magnetic field], it is feasible to observe the predicted resonances in the conductance in experiments, especially at higher magnetic fields or at lower temperatures than stated above.

IV. DETAILS OF THE NUMERICAL CALCULATIONS
In this section we present the details of our numerical calculations for the conductance shown in Fig. 2. The transmission was calculated employing the Green's function technique of Refs. 25 and 26, based on the nearest-neighbor TB model of graphene. 23 However, in our approach we calculate the Green's function of the scattering region in a more efficient way than other Green's function techniques available in the literature. 25-27 We construct the scattering region from a translationally invariant ribbon, and by using the Dyson equation 28 we can remove or add the necessary sites of the TB model in order to obtain the desired structure. This procedure involves only sites that are directly related to the inhomogeneities of the scattering region, and thus our approach is more efficient than the other Green's function methods, where the computations involve all sites of the scattering region. The electrodes in the calculations are assumed to be heavily doped, semi-infinite graphene nanoribbons. The nanoribbons and the scattering region including the hole are considered to be perfectly ballistic, and the magnetic field is incorporated in the system by means of the Peierls substitution. 29
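As a minimal illustration of the Peierls substitution just mentioned (our sketch; the Landau gauge and the numerical values below are assumptions for the example, not necessarily the exact choices of the calculation):

```python
import numpy as np

PHI_0 = 4.135667696e-15  # flux quantum h/e, in Wb

def peierls_hopping(gamma, r_i, r_j, B):
    """Dress a hopping amplitude with a Peierls phase,
    gamma_ij -> gamma_ij * exp(i 2 pi / phi_0 * integral_{r_j}^{r_i} A . dl),
    for the Landau gauge A = (-B*y, 0, 0), which is translationally
    invariant along a ribbon running in the x direction. The line
    integral is evaluated exactly for a straight path between the sites."""
    (x_i, y_i), (x_j, y_j) = r_i, r_j
    line_integral = -B * (x_i - x_j) * (y_i + y_j) / 2.0
    return gamma * np.exp(2j * np.pi * line_integral / PHI_0)

# Example: two nearest-neighbor carbon sites (|r_i - r_j| ~ 0.142 nm) at B = 0.1 T
t = peierls_hopping(-2.7, (0.071e-9, 0.123e-9), (0.0, 0.0), 0.1)
```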
(i) Pristine ribbon (R = 0 and B = 0). To understand the basic idea of our approach, we first explain it for a scattering region including no hole or other scattering sites, and we set the value of the magnetic field to zero. The Hamiltonian of the translationally invariant ribbon can be written as

H = Σ_z ( H_0 |z⟩⟨z| + H_1 |z+1⟩⟨z| + H_1† |z⟩⟨z+1| ),   (5)

where H_0 is the Hamiltonian of one UC,

(H_0)_ij = ε_i δ_ij + γ_ij   (6)

(with δ_ij the Kronecker delta), and the matrix H_1 contains the hopping amplitudes between nearest-neighbor UCs,

(H_1)_ij = γ_ij, with R_i lying in the UC shifted by a relative to the UC of R_j,   (7)

where a is the translational vector of the ribbon (see Fig. 5), R_i points to the ith site of the lattice, and ε_i is the on-site energy. In our calculations we take into account only nearest-neighbor hopping amplitudes γ_ij ≡ γ, where sites i and j denote nearest-neighbor carbon atoms in the honeycomb lattice for which |R_i − R_j| = r_C-C [see Fig. 5(b)]. In addition, the magnetic field can be incorporated by means of the Peierls substitution 29,

γ_ij → γ_ij exp( (2πi/φ_0) ∫_{R_j}^{R_i} A(r)·dr ),   (8)

with the flux quantum φ_0 = h/e and the vector potential

A(r) = B [r·(ẑ × â)] â   (9)

describing the magnetic field B = (0, 0, B), where the z direction is perpendicular to the plane of the ribbon and â = a/|a|. Note that A(r) is translationally invariant along the ribbon, and hence the matrices H_0 and H_1 are the same for every UC. The Green's functions g_zz′ of the translationally invariant ribbon, calculated at the UCs z, z′ ∈ {0, 1, N, N+1}, can be arranged into a matrix form as

G = | g_00      g_01      g_0N      g_0,N+1   |
    | g_10      g_11      g_1N      g_1,N+1   |
    | g_N0      g_N1      g_NN      g_N,N+1   |
    | g_N+1,0   g_N+1,1   g_N+1,N   g_N+1,N+1 |.   (10)

Since the structure of the ribbon contains only nearest-neighbor hoppings between the UCs, without long-range interaction, the effective Hamiltonian defined as H_eff = EI − G^(-1) has the following structure:

H_eff = | H^eff_00    H^eff_01    0           0           |
        | H^eff_10    H^eff_11    H^eff_1N    0           |
        | 0           H^eff_N1    H^eff_NN    H^eff_N,N+1 |
        | 0           0           H^eff_N+1,N H^eff_N+1,N+1 |.   (11)

Note that there is no effective coupling between UCs 0 and N, since these UCs are coupled via UC 1, and therefore the matrix element H^eff_0N vanishes. For similar reasons the matrix elements H^eff_N0, H^eff_1,N+1, H^eff_N+1,1, H^eff_0,N+1, and H^eff_N+1,0 also become zeros. Using Dyson's equation, let us apply to the Hamiltonian H_eff a perturbation given by the potentials V_1 = −H^eff_01 |0⟩⟨1| − H^eff_10 |1⟩⟨0| and V_2 = −H^eff_N,N+1 |N⟩⟨N+1| − H^eff_N+1,N |N+1⟩⟨N|. The potentials V_1 and V_2 uncouple a strip of length L = N|a| from the rest of the ribbon. The perturbed Green's function of the ribbon, defined by G̃^(-1) = EI − H_eff − V_1 − V_2, then falls apart into three separate subsystems:

G̃ = | g_L   0        0   |
    | 0     G_strip  0   |
    | 0     0        g_R |,   (12)

where g_L (g_R) is the surface Green's function of the semi-infinite lead terminated by UC 0 (N+1) from the right (left), and G_strip is the surface Green's function of the strip terminated by UCs 1 and N. Note that the number of sites included in the calculations above is determined by the number of sites of one UC. Therefore the computational cost of calculating the surface Green's function G_strip of a graphene strip is proportional to the width of the strip instead of its area.

We verified our numerical approach in two cases. First, the conductance calculations can be performed analytically on systems where the UC contains only one degree of freedom, as in the case of one-dimensional chains. 30 Furthermore, we verified our numerical method by calculating the conductivity of a graphene strip (the conductivity is defined as σ = CL/W, where L is the length, W is the width, and C is the conductance of the strip). Figure 4 shows the calculated conductivity of a zigzag-edged graphene strip as a function of the aspect ratio W/L. As the aspect ratio becomes larger than 2, the calculated conductivity tends to the value obtained for bulk graphene, 31 as demonstrated in Ref. 32.
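The one-dimensional verification case lends itself to a compact illustration of the surface Green's function machinery used in this section. The sketch below (ours, not the authors' code) builds the surface Green's function of a semi-infinite chain by repeatedly attaching one site via Dyson's equation and compares it with the closed-form result:

```python
import numpy as np

def surface_gf_iterative(E, t, eta=1e-3, tol=1e-10, max_iter=100000):
    """Surface Green's function g(E) of a semi-infinite 1D chain with
    on-site energy 0 and hopping t. Attaching one more site to the chain
    via Dyson's equation gives the self-consistency g = 1 / (E - t^2 g),
    iterated here to convergence (eta is a small imaginary part)."""
    z = E + 1j * eta
    g = 1.0 / z
    for _ in range(max_iter):
        g_new = 1.0 / (z - t**2 * g)
        if abs(g_new - g) < tol:
            break
        g = g_new
    return g

def surface_gf_closed(E, t, eta=1e-3):
    """Closed form g(E) = (z - sqrt(z^2 - 4 t^2)) / (2 t^2), with the
    branch chosen so that Im g < 0 (retarded Green's function)."""
    z = E + 1j * eta
    sq = np.sqrt(z**2 - 4.0 * t**2)
    g = (z - sq) / (2.0 * t**2)
    return g if g.imag < 0 else (z + sq) / (2.0 * t**2)

E, t = 0.5, 1.0
print(surface_gf_iterative(E, t))  # agrees with the line below
print(surface_gf_closed(E, t))
```

For a graphene ribbon the same recursion runs over unit-cell blocks, with the scalars replaced by matrices whose dimension is set by the ribbon width, which is where the linear (rather than area) scaling of the computational effort comes from.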
(ii) Graphene antidot (R > 0 and B > 0). To calculate the surface Green's function of a graphene antidot we follow the procedure described in the previous subsection. First we calculate the Green's function g_zz′ at the sites of the UCs z, z′ ∈ {0, 1, N, N+1, {H}}, where {H} stands for the UCs having finite overlap with the hole to be created. The calculated Green's function then reads as

G_H = | G       G_SP,H |
      | G_H,SP  G_H,H  |,   (13)

where G is given by Eq. (10) and SP labels the UCs {0, 1, N, N+1}. The corresponding effective Hamiltonian,

H^eff,H = EI − G_H^(-1) = | H_SP,SP  H_SP,H |
                          | H_H,SP   H_H,H  |,   (14)

contains the matrix H_SP,SP, which has the same structure as H_eff given by Eq. (11). Now we break all bonds between the hole and the rest of the ribbon by applying to the Hamiltonian H^eff,H a perturbation given by the potential V_H = −H_H,SP |H⟩⟨SP| − H_SP,H |SP⟩⟨H|. We also apply the perturbations V_1 and V_2 in order to split the ribbon into three pieces, as described in the previous subsection. Finally, one can extract the surface Green's function of the graphene antidot from the Green's function G̃_H = [EI − H^eff,H − V_1 − V_2 − V_H]^(-1) of the perturbed system, as in Eq. (12).

Utilizing the current framework, one can also include scattering impurities in the calculations, either as atomic vacancies in the lattice, which can be treated in the same way as the hole discussed above, and/or as scattering potentials, which can be implemented via Dyson's equation. In the latter case the bonds coupling the perturbed sites to the rest of the lattice are not removed. The calculations described above involve a number of sites determined by the perimeter of the hole and by the width of the ribbon. Therefore, the required computational effort of the approach scales linearly with the dimensions of the scattering region.

V. SUMMARY
In this paper we showed that BSs generated around a graphene antidot by a perpendicular magnetic field can be detected by studying the two-terminal conductance in the ballistic regime. According to our calculations, BSs localized at a hole in a graphene strip can be detected via the observation of Breit-Wigner-like resonances arising in the conductance as a function of the Fermi level. The resonances can be observed close to the energies of the BSs. The shift of the resonances measured from the energies of the BSs is determined by the coupling strength between the BSs and the edge states of the ribbon. According to our estimates, it is feasible to observe the appearance of these resonances in the conductance in experiments as well. In addition, we provided an efficient numerical method to calculate the surface Green's function of scattering regions containing several times 10⁵ sites. The present numerical method, an extension of the theoretical framework reported in Refs. 25 and 26, involves only sites that are directly related to the inhomogeneities of the scattering region. Thus our numerical approach is more efficient than other recursive Green's function techniques and can be utilized to study the transport properties of other systems as well.
Alzheimer's disease-like neuropathology in Down syndrome cortical organoids

Introduction: Down syndrome (DS) is a genetic disorder caused by an extra copy of chromosome 21, and DS remains one of the most common causes of intellectual disability in humans. All DS patients develop Alzheimer's disease (AD)-like neuropathological changes, including accumulation of plaques and tangles, by their 40s, much earlier than the onset of such neuropathological changes in AD patients. Due to the lack of human samples and appropriate techniques, our understanding of DS neuropathology during brain development, or before the clinical onset of the disease, remains largely unexplored at the cellular and molecular levels. Methods: We used induced pluripotent stem cells (iPSCs) and iPSC-derived 3D cortical organoids to model Alzheimer's disease in Down syndrome and explore the earliest cellular and molecular changes during DS fetal brain development. Results: We report that DS iPSCs have a lower growth rate than control iPSCs due to decreased cell proliferation. DS iPSC-derived cortical organoids show much higher immunoreactivity with amyloid beta (Aβ) antibodies and a significantly higher amount of amyloid plaques than control organoids. Although ELISA did not detect a difference in Aβ40 and Aβ42 levels between the two groups, the ratio of Aβ42/Aβ40 in the detergent-insoluble fraction of DS organoids was significantly higher than in control organoids. Furthermore, increased Tau phosphorylation (pTau S396) in DS organoids was confirmed by immunostaining and Western blot, and ELISA demonstrated that the ratio of insoluble Tau/total Tau in DS organoids was significantly higher than in control organoids. Conclusion: DS iPSC-derived cortical organoids mimic an AD-like pathophysiological phenotype in vitro, including abnormal Aβ and insoluble Tau accumulation. The molecular neuropathologic signature of AD is present in DS much earlier than predicted, even in early fetal brain development, illustrating the notion that brain organoids may be a good model to study early neurodegenerative conditions.
KEYWORDS: Down syndrome, Alzheimer's disease (AD), iPSC, proliferation, amyloid-beta, tau pathology, cortical organoid

Introduction
Down syndrome (DS) is a genetic disorder caused by an extra copy of chromosome 21, characterized by physical growth delay, mild to moderate intellectual disability, and characteristic facial features. DS is one of the most common causes of intellectual disability in humans, with an incidence of one in 700 newborns. Moreover, DS patients often have an increased risk of developing many other health problems, including Alzheimer's disease (AD), obstructive sleep apnea, congenital heart defects, and leukemia (Asim et al., 2015). Almost all DS patients show AD-like neuropathological changes (AD-DS), such as accumulation of plaques and tangles, at about 40 years of age. Approximately 40%-80% of DS patients develop AD-like dementia by 50-60 years, much earlier than the majority of AD patients (Oliver and Holland, 1986; Holland et al., 1998; Zigman et al., 2004). DS and AD patients share many neuropathological changes, such as amyloid beta accumulation, tau pathology, endosomal dysfunction, synaptic dysfunction, and neurogenesis defects. The earliest neuropathological changes, such as enlarged endosomes and impaired synaptogenesis and neurogenesis, can be traced back to early life, even to the fetal brain, in DS patients (Marin-Padilla, 1972; Wisniewski et al., 1984; Cataldo et al., 2000; Baburamani et al., 2019; Patkee et al., 2020; Tang et al., 2021). However, due to the lack of human samples and appropriate techniques, our understanding of DS neuropathology during brain development, or before the clinical onset of the disease, remains largely unexplored at the cellular and molecular levels.

Induced pluripotent stem cell (iPSC) technology, first introduced by Yamanaka in 2007 (Takahashi et al., 2007), has been widely used by many laboratories to study a variety of human diseases, including Alzheimer's disease (Kondo et al., 2013), Parkinson's disease (Hargus et al., 2010), and autism spectrum disorder (Russo et al., 2019). iPSC-derived 3D cortical organoids have been shown to closely simulate key endogenous neurodevelopmental events, with a cytoarchitecture resembling regions of the developing human brain (Paşca, 2018), and to recapitulate the trajectory of human brain development and maturation (Lancaster and Knoblich, 2014; Paşca et al., 2015). Moreover, brain organoids have a transcriptome profile close to that of the early human brain (Camp et al., 2015; Paşca et al., 2015; Nascimento et al., 2019; Trujillo et al., 2019). Therefore, iPSC-derived brain organoids represent an optimized approach for modeling neurodevelopmental disorders such as Down syndrome and allow us to explore the earliest cellular and molecular changes during DS fetal brain development.
To our knowledge, only one published study has so far used iPSC-derived organoids to investigate AD-like pathology by comparing a DS patient and a healthy control (Gonzalez et al., 2018). Here, by comparing DS-specific iPSC lines and their isogenic control iPSC lines, we demonstrate that abnormalities in Down syndrome start as early as the iPSC stage and that an AD-like neuropathological phenotype, including abnormal Aβ accumulation and Tau pathology, progressively manifests itself in organoids during early development.

Materials and methods
Karyotype of iPSC lines
All iPSC lines were obtained from Dr. Stuart Orkin at Boston Children's Hospital through a material transfer agreement. The use of cells was approved by the IRB at the University of California San Diego. Two DS iPSC subclones and two isogenic control subclones were isolated from DS1-iPS4 cells as previously described (Maclean et al., 2012). The iPSCs were fed daily with mTeSR medium. DNA was extracted from iPSCs using a DNeasy Blood and Tissue kit (Qiagen, CA), and the presence of an extra copy of chromosome 21 in DS iPSCs was confirmed by high-resolution karyotyping performed by Cell Line Genetics (Madison, WI).

Real-time PCR
RNA from iPSCs was extracted using the RNeasy Mini kit (Qiagen, CA). One microgram of total RNA was converted to complementary DNA using the SuperScript First-Strand Synthesis System for RT-PCR (Life Technologies, CA). Real-time PCR was performed on a GeneAmp 7900 sequence detection system with Power SYBR Green (Applied Biosystems, CA). GAPDH expression was used as a loading control, and real-time PCR data are presented after normalization to GAPDH expression. The primers used in the current study are listed in Table 1.

Cell growth analysis
DS and isogenic control iPSCs were seeded in 6-well plates at a density of 10⁵ live cells/well, and total cells were collected and counted after 3, 5, and 7 days in culture using a Bio-Rad TC20 cell counter. Cell numbers were compared between the two groups.

Cell proliferation experiments
DS and isogenic control iPSCs were seeded in 4-well chamber slides at a density of 10⁴ live cells/well and cultured for 5 days. Immunostaining of Ki67 in iPSCs followed the standard immunofluorescence staining protocol. Double labeling of EdU and BrdU was performed as previously described (Deshpande et al., 2017) with minor modifications. In brief, iPSCs were treated with EdU (20 µM) for 1 h and then with BrdU (10 µM) for an additional 1 h of incubation. Cells were fixed with 4% paraformaldehyde for 15 min and then permeabilized with 0.5% Triton in PBS for 20 min. DNA was denatured with 4 M HCl for 20 min, followed by phosphate-citric acid buffer for 10 min. EdU detection was performed using the Click-iT Plus EdU Cell Proliferation Kit following the manufacturer's protocol. BrdU detection followed the standard immunofluorescence protocol with a BrdU antibody (clone MoBU-1). Nuclei were stained with Hoechst 33342. EdU+ and BrdU+ cells were compared between the two groups.

Generation of cortical organoids
Cortical organoids were generated from iPSCs, and organoid spheres were kept in suspension under rotation (95 rpm) as previously described (Camp et al., 2015; Trujillo et al., 2019; Yao et al., 2020).
In brief, on day 0, iPSC colonies were dissociated into single cells using Accutase diluted in PBS at a ratio of 1:1, and approximately 4 × 10⁶ cells were transferred to one well of a 6-well plate in mTeSR1 supplemented with 5 µM Y-27632, 10 µM SB431542 (SB), and 1 µM dorsomorphin (Dorso) for 3 days. Y-27632 was removed after 24 h. On day 3, mTeSR1 was substituted by base medium containing Neurobasal, GlutaMAX, 1% MEM nonessential amino acids (NEAA), 2% Gem21, and 1% penicillin/streptomycin (PS), supplemented with 1% N2, 10 µM SB, and 1 µM Dorso. The medium was changed every other day. On day 9, organoids were fed with base medium supplemented with 20 ng/ml FGF2 for a week, with the medium changed every day. On day 16, the medium was switched to base medium supplemented with 20 ng/ml FGF2 and 20 ng/ml EGF. On day 22, organoids were fed with base medium supplemented with 10 ng/ml BDNF, 10 ng/ml GDNF, 10 ng/ml NT-3, 200 µM L-ascorbic acid, and 1 mM dibutyryl-cAMP. Four weeks later, cortical organoids were maintained in base medium with media changes twice a week. All reagents and chemicals used in the current study are listed in Table 2.

ELISA of amyloid beta and tau
Detergent-soluble and -insoluble fractions of amyloid beta (Aβ40, Aβ42) and total Tau were quantified using ELISA kits following the manufacturer's instructions. Soluble and insoluble fractions were prepared based on a previously published study (Wang et al., 2021). In brief, organoids were homogenized in RIPA buffer with protease inhibitors, the homogenate was centrifuged at 15,000× g for 10 min, and the supernatant was collected to yield the detergent-soluble fraction. The remaining organoid pellet was then resuspended in 5 M guanidine-HCl diluted in 50 mM Tris, pH 8.0, with protease inhibitor and mechanically agitated at room temperature for 4 h to extract the detergent-insoluble fraction. Guanidine-treated samples were diluted 1:2 in sterile PBS and centrifuged at 16,000× g for 20 min, and the supernatant was collected to yield the insoluble fraction. The protein concentration of both soluble and insoluble supernatants was determined using a BCA assay, and an equal amount of total protein was used for the ELISA assay.

Immunofluorescence staining
Organoids were fixed with 4% paraformaldehyde for 30 min and then transferred to 30% sucrose solution at 4 °C overnight. Cryopreserved organoids were embedded in OCT and sectioned into 14-µm-thick slices for immunofluorescence staining. The sections were treated with 0.5% Triton in PBS for 20 min, followed by blocking with 5% BSA in PBS for 1 h at room temperature. The sections were then incubated overnight with the primary antibody (see Table 2) diluted in PBS with 5% BSA, and for 1 h with Alexa 488- and Alexa 555-conjugated secondary antibodies against the specific IgG types. ProLong Diamond antifade mountant with DAPI was used as a counterstain (Sigma, St. Louis, MO).

Amylo-Glo staining
Amyloid plaque staining on organoid sections was performed using the Amylo-Glo RTD amyloid plaque stain reagent (Biosensis, Australia) following the manufacturer's instructions.

Western blot
Twenty micrograms of soluble protein from 12-week DS and control organoids were separated on NuPAGE 4%-12% Bis-Tris gels and then transferred to polyvinylidene difluoride membranes (Millipore, CA). The membranes were probed with a primary antibody in PBST with 5% BSA overnight at 4 °C, followed by an appropriate horseradish peroxidase-conjugated secondary antibody (Invitrogen, CA).
Immunoreactive bands were visualized using a Bio-Rad ChemiDoc XRS with enhanced chemiluminescence (Perkin Elmer, MA). Equal loading was assessed using GAPDH, and data were analyzed using Image Lab software (version 3.0, Bio-Rad).

Imaging analysis
All immunofluorescence images were captured using a 20× objective on a Nikon A1 confocal microscope with NIS-Elements AR 5.20.02 software (Nikon Instruments Inc., Melville, NY). All images to be compared were obtained with identical settings and quantified using FIJI/ImageJ (version 2.5.0).

Statistical analyses
All data analysis and plots were done with OriginPro 2018b software (Northampton, MA, USA). All data were subjected to Shapiro-Wilk normality testing; normally distributed data were analyzed by one-way ANOVA, and non-normally distributed data were analyzed by the non-parametric Mann-Whitney test. Results are expressed as mean ± SEM, and the threshold for statistical significance (p-value) was set at 0.05. All experiments were repeated at least three times.

Results
Characterization of DS and isogenic control iPSCs
To confirm the presence of an extra copy of chromosome 21 in the DS lines, a stem array, a higher-resolution form of karyotyping, was performed; it confirmed that the DS iPSC lines indeed have a third copy of chromosome 21, while the isogenic control iPSC lines have a normal karyotype (Figure 1A). To examine whether an extra copy of chromosome 21 results in overexpression of chromosome 21-encoded genes, we compared the expression profiles of six genes between DS iPSCs and isogenic control iPSCs using real-time PCR. Five of them are known to be overexpressed in both DS and AD patients (Gomez et al., 2020), and DSCAM is known to play an important role in synaptic plasticity and maturation (Stachowicz, 2018; Chen et al., 2022). As shown in Figure 1B, all six chromosome 21-encoded genes tested show significantly increased expression in DS iPSCs as compared to isogenic controls (Figure 1B).

Decreased cell growth in DS iPSCs
Previous studies have shown that DS fibroblasts and neural progenitor cells exhibit decreased cell proliferation or increased apoptosis (Kimura et al., 2005; Gimeno et al., 2014; Hibaoui et al., 2014), while DS astrocyte precursor cells exhibit accelerated proliferation (Kawatani et al., 2021). Since we made the preliminary observation in our experiments that the expansion of DS iPSC lines is slower than that of control iPSC lines, we wanted to know whether the rate of proliferation or cell death differs between DS and control. We first cultured both DS and control iPSC lines starting from the same number of live cells (10⁵ cells) at day 0, and total cell numbers were counted at day 3, day 5, and day 7. As shown in Figure 2A, there was a significantly reduced number of DS iPSCs as compared to control lines at 5 and 7 days in culture (Figure 2A and Supplementary Figure 2A, p < 0.05 and p < 0.001). We next used Ki67, BrdU, and EdU as cell proliferation markers to compare the number of proliferating cells between DS iPSCs and control iPSCs. Ki67 labels cells in the G1, S, G2, and M phases, whereas BrdU and EdU label cells in the S phase only. Interestingly, most of the DS and control iPS cells were immunoreactive for Ki67 (Figure 2B); we therefore compared and quantified the BrdU+ and EdU+ cells between the two groups. As shown in Figures 2C,D, DS iPSCs have a significantly decreased number of BrdU+ and EdU+ cells compared with control cells (Figures 2C,D and Supplementary Figure 2B, p < 0.01), suggesting decreased proliferation in DS iPSCs.
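As a rough sketch of the normality-gated testing scheme described under "Statistical analyses" (our illustration; the numbers below are placeholders, not the study's measurements):

```python
from scipy import stats

def compare_groups(control, ds, alpha=0.05):
    """Shapiro-Wilk normality test on each group, then one-way ANOVA if
    both groups look normal, otherwise a non-parametric Mann-Whitney U
    test, mirroring the analysis scheme described above."""
    normal = all(stats.shapiro(group)[1] > alpha for group in (control, ds))
    if normal:
        name, result = "one-way ANOVA", stats.f_oneway(control, ds)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(
            control, ds, alternative="two-sided")
    return name, result.pvalue

# Placeholder replicate values (e.g., fraction of BrdU+ cells per experiment)
control = [0.31, 0.29, 0.33, 0.30, 0.32]
ds = [0.24, 0.22, 0.25, 0.23, 0.26]
print(compare_groups(control, ds))
```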
To exclude a potential effect of loss of pluripotency on the decreased proliferation of DS iPSCs, we confirmed that both DS and control iPSCs remained in a stem cell state at day 5, positive for the pluripotency markers TRA-1-60, Nanog, and SOX2 and negative for the neural progenitor marker nestin (Supplementary Figure 2C). Last, we used cleaved caspase 3 as a cell death marker to compare the amount of cell death between DS and control iPSCs. There were no significant differences in cleaved caspase 3+ cells between the two groups (Figures 2E,F and Supplementary Figure 2D, p = 0.21), suggesting that the slower growth of DS iPSCs is not due to increased cell death but to decreased proliferation.

Figure 1. Characterizations of DS iPSC and isogenic control cell lines. (A) DS iPSC (top) and isogenic control (bottom) lines have correct karyotypes. (B) Overexpression of chromosome 21 genes in DS iPSCs was confirmed by real-time PCR (n = 4). *p < 0.05 and ***p < 0.001 vs. control.

Abnormal accumulation of Aβ in DS organoids
Accumulation of Aβ is a major AD-like neuropathological feature in DS patients. Aβ is a cleavage product of APP generated through sequential proteolytic processing by β- and γ-secretases, a process that generates a number of Aβ isoforms 36-43 amino acid residues in length. Aβ40 and Aβ42 are the two major isoforms, and the longer isoform, Aβ42, is more prone to form aggregates and is more toxic. Most amyloid plaques contain beta amyloid of the 40 and 42 isoforms (Antzutkin et al., 2000; Balbach et al., 2002; Gu and Guo, 2013). In the current study, we used three different methods to examine Aβ accumulation in DS organoids. First, immunostaining was performed on 8-week-old and 12-week-old organoids using two different Aβ antibodies, D54D2 and 82E1 (Horikoshi et al., 2004; Ruiz-Riquelme et al., 2021), which recognize Aβ37-42 and Aβ40-42, respectively. In general, there was much more immunoreactivity of D54D2 than of 82E1 in both 8-week and 12-week organoid sections. Immunoreactivity of D54D2 was significantly increased in both 8-week (data not shown) and 12-week-old DS organoids (Figures 3A,B, p < 0.01) as compared to control organoids. In contrast, immunoreactivity of 82E1 was hardly detected in 8-week organoid sections but could be detected in 12-week organoid sections. The difference in 82E1 immunoreactivity between DS and control organoids was significant at 12 weeks (Figures 3A,B and Supplementary Figures 3A,B, p < 0.01). Next, we used the Amylo-Glo plaque stain reagent to stain amyloid plaques in organoid sections and observed significantly increased Amylo-Glo+ staining in DS organoids compared with control organoids (Figures 3C,D and Supplementary Figure 3C, p < 0.05). Lastly, we used ELISA to quantify amyloid beta, including Aβ40 and Aβ42, in the detergent-soluble and -insoluble fractions of DS and control organoids. No significant difference in Aβ42, Aβ40, or Aβ42/Aβ40 was observed in the soluble fraction between the two groups at either 8 or 12 weeks. Instead, we found a significantly increased Aβ42/Aβ40 ratio in the insoluble fraction of 12-week DS organoids compared with control organoids of the same age (Figure 3E and Supplementary Figure 3D, p < 0.05).
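The two ratios compared in these ELISA experiments are straightforward to compute; a toy calculation (ours, with invented placeholder readings in pg/ml, not measured values) is:

```python
def ab_ratio(ab42, ab40):
    """A-beta 42/40 ratio from the ELISA concentrations of one fraction."""
    return ab42 / ab40

def insoluble_tau_fraction(soluble_tau, insoluble_tau):
    """Insoluble Tau as a share of total Tau (soluble + insoluble)."""
    return insoluble_tau / (soluble_tau + insoluble_tau)

# Placeholder ELISA readings (pg/ml) for one 12-week organoid sample
insoluble = {"ab40": 120.0, "ab42": 18.0}
print(ab_ratio(insoluble["ab42"], insoluble["ab40"]))                   # 0.15
print(insoluble_tau_fraction(soluble_tau=900.0, insoluble_tau=150.0))   # ~0.143
```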
Abnormal accumulation of tau pathology in DS organoids
Hyperphosphorylation of Tau is another AD-like pathological hallmark in DS patients and occurs following Aβ accumulation. We therefore compared the immunoreactivity of phospho-Tau S396 (pTau S396, PHF-1), a widely used antibody for studying Tau pathology in AD, in DS and control organoids at 12 weeks (Citron et al., 1994; Foidl and Humpel, 2018; Aragao Gomes et al., 2021). Immunoreactivity of phosphorylated Tau S396 was significantly increased in DS organoids as compared with control organoids (Figures 4A,B and Supplementary Figures 4A,B, p < 0.001). This increased phosphorylation of Tau was also confirmed by Western blot using the pTau S396 and total Tau (A-10) antibodies: the ratio of pTau S396/Tau was significantly increased in DS organoids compared with control organoids (Figures 4C,D and Supplementary Figure 4C, p < 0.001). The pathological hyperphosphorylation of tau proteins induces the formation of insoluble aggregates and neurofibrillary tangles (NFTs) that abnormally accumulate inside neurons in AD or AD-like DS brains (Mondragon-Rodriguez et al., 2014). Levels of soluble and insoluble tau reflect the overall status of tau phosphorylation in vivo (Hirata-Fukae et al., 2009), and insoluble tau correlates with the pathological features of tauopathy (Ren and Sahara, 2013). Therefore, we measured the amount of Tau in the soluble and insoluble fractions of organoids using ELISA. The ratio of insoluble Tau/total Tau was significantly increased in 12-week DS organoids as compared to control organoids (Figure 4E and Supplementary Figure 4D, p < 0.05), further confirming relatively increased insoluble tau aggregates in DS organoids.

Discussion
Virtually all DS patients over 40 years of age have AD-like neuropathology, including abnormal Aβ accumulation and neurofibrillary tangles (Wisniewski et al., 1985; Mann and Esiri, 1989). According to the National Down Syndrome Society, about 30% of DS patients are diagnosed with dementia in their 50s, and 50% of DS patients are diagnosed with dementia by their 60s. However, how the disease progresses, and how early it starts in the DS or AD-DS brain before the clinical onset of the disease, is still obscure and much less explored.

Figure 2. Decreased cell growth in DS iPS cells. (A) iPSCs were seeded in a 6-well plate at a density of 10⁵ live cells/well, and total cell numbers were counted after 3, 5, and 7 days in culture. DS iPSCs show a significantly reduced cell number as compared to control iPSCs at day 5 (n = 12, p < 0.05) and day 7 (n = 12, p < 0.001). (B) Representative immunostaining of Ki67 in DS and control iPSCs. (C) Representative double labeling of BrdU (green) and EdU (red) in DS and control iPSCs. (D) Immunohistochemical analysis of BrdU+ and EdU+ cells revealed that DS iPSCs have a significantly decreased number of BrdU+ and EdU+ cells compared with control iPSCs (n = 1,690-1,897 cells, p < 0.01). (E) Representative immunostaining of cleaved caspase 3 in DS and control iPSCs. (F) Immunohistochemical analysis of cleaved caspase 3 revealed a similar percentage of cell death in DS and control iPSCs (n = 1,045-1,570 cells, p > 0.05). *p < 0.05, **p < 0.01, and ***p < 0.001 vs. control.

iPSC-derived 3D organoids closely simulate key endogenous neurodevelopmental events, with a cytoarchitecture resembling the developing human brain (Helen Zhao et al., 2021) and with the trajectory of human brain development and maturation (Trujillo et al., 2019). Therefore, we used DS-specific iPSC-derived brain organoids as a model system to study AD-like disease pathology in DS. Clinically, individuals with DS have an overall reduced brain volume.
Hypocellularity has been associated with impaired neurogenesis and a lower proliferative rate, observable as early as 24 weeks of gestation in the DS brain (Stagni et al., 2018; Utagawa et al., 2022). This phenomenon has been observed not only in neurons but also in other cell types in vitro. For instance, DS fibroblasts show decreased proliferation (Gimeno et al., 2014); DS iPSC-derived neural progenitor cells show decreased proliferation and increased apoptosis (Hibaoui et al., 2014); DS iPSC-derived astrocytes, however, show an increased proliferation rate compared with control cells (Kawatani et al., 2021). In the current study, we report that DS iPSCs have a slower growth rate and decreased proliferation compared with control cells, without any difference in cell death between the two groups.

Figure 3. Abnormal Aß accumulation in DS iPSC-derived cortical organoids. (A) Representative Aß immunostaining in 12-week DS and isogenic control organoids with two different antibodies, D54D2 (Aß37-42, green) and 82E1 (Aß40-42, red). (B) Immunohistochemical analysis of the Aß antibodies D54D2 and 82E1 revealed significantly increased Aß immunoreactivity in DS organoids (n = 10-11, p < 0.01). (C) Representative Aß plaque staining in DS and control organoids using Amylo-Glo. (D) Immunohistochemical analysis of Amylo-Glo revealed a significantly increased amyloid plaque load in DS organoids (n = 8, p < 0.05). (E) Aß40 and Aß42 in the soluble and insoluble fractions of 8-week and 12-week DS and control organoids were quantified using ELISA; the Aß42/Aß40 ratio was significantly increased in the insoluble fractions of 12-week DS organoids (n = 10-11, p < 0.05). *p < 0.05, and **p < 0.01 vs. Control.

Figure 4. Tau pathology in DS iPSC-derived cortical organoids. (A) Representative pTau S396 (red) and DAPI (blue) immunostaining in 12-week DS and isogenic control organoids. (B) Immunohistochemical analysis of pTau S396 revealed significantly increased pTau S396 immunoreactivity in DS organoids (n = 24-28, p < 0.001). (C) A representative Western blot of pTau S396 and Tau expression in 12-week DS and isogenic control organoids. (D) Immunoblotting analysis of pTau S396 and Tau revealed a significantly increased pTau S396/Tau ratio in DS organoids (n = 6, p < 0.001). (E) Both soluble and insoluble Tau in 12-week DS and control organoids were measured, and ELISA data revealed that DS organoids have an increased insoluble Tau/total Tau ratio (total Tau = soluble Tau + insoluble Tau; n = 6, p < 0.05). *p < 0.05, and ***p < 0.001 vs. Control.

Abnormal accumulation of extracellular amyloid beta (Aß) and intracellular Tau hyperphosphorylation are two major pathological features of the AD brain and the AD-like DS brain. Aß peptides are by-products of the normal cellular metabolism of amyloid precursor protein (APP; Sinha and Lieberburg, 1999). APP is an integral membrane protein encoded by the APP gene located on chromosome 21. APP has two primary endogenous processing pathways: the non-amyloidogenic pathway and the amyloidogenic pathway. Under physiological conditions, the majority of APP is processed through the non-amyloidogenic pathway; a small portion of APP is processed through the amyloidogenic pathway and yields Aβ37, Aβ38, Aβ40, and Aβ42. Aβ40 is the most abundant form of Aβ, but Aβ42 is more prone to form insoluble aggregates.
Under pathological conditions such as Alzheimer's disease and Down syndrome, an altered amyloidogenic pathway and an increased APP gene dose promote Aβ accumulation, Aβ aggregation, and the formation of insoluble fibrils into amyloid plaques. In DS patients, abnormal Aß accumulation starts as early as 8 years of age (Leverenz and Raskind, 1998). Here, in the organoids, we were able to detect increased Aß immunostaining and increased accumulation of amyloid plaque in DS cortical organoids at a very early age, in 8- and 12-week-old organoids in culture. In this regard, it is noteworthy that the differences in both Aß 82E1 immunoreactivity and the Aß42/Aß40 ratio were not significant at 8 weeks but emerged at 12 weeks. Skovronsky and colleagues previously described that the intracellular pool of insoluble Aß accumulates in a time-dependent manner in NT2N neuronal cultures (Skovronsky et al., 1998). The time-dependent, increasing difference in Aß between DS and control organoids confirms that abnormal Aß accumulation is a progressive process and a disease-specific property of DS organoids as compared to controls. These data suggest that disease-specific iPSC-derived organoid culture sheds important light on the pathophysiology of DS, indicating that abnormal brain formation starts very early in life, probably during fetal life.

Elevated phosphorylation of Tau and its abnormal aggregation are widely considered pathological hallmarks of AD. Here we show that both the protein expression and the immunoreactivity of pTau S396 are significantly increased in DS organoids, demonstrating hyperphosphorylation of Tau in DS organoids, which in general leads to Tau protein aggregation. Pathological aggregation of Tau protein forms insoluble twisted fibers, named neurofibrillary tangles, inside cells and acts as a biomarker of AD-like pathology. Consistently, we found an increased proportion of insoluble Tau in DS organoids, which may therefore contribute to Tau pathology in DS patients.

It is worth mentioning that differences between individual clones did exist (Supplementary Figures). For example, variability between the two DS lines meant that the p-value for cleaved caspase 3+ cell death did not reach significance (Supplementary Figure 2D, p = 0.21). However, the difference between DS lines and their isogenic controls in insoluble tau/total tau reached significance in spite of the differences between DS clones (Supplementary Figure 4D, p < 0.02). We speculate that the differences between individual lines might be a consequence of differentiation efficiency among batches, whereas the difference observed between DS and control reflects the disease phenotype. In the current study, we used isogenic lines to avoid genetic variation and confounding factors.

Recent Tau studies, however, lead us to reconsider the role of Tau phosphorylation in Alzheimer's disease (Wegmann et al., 2021). Because the phosphorylation patterns of physiological and pathological Tau are surprisingly similar and heterogeneous, and the phosphorylation levels of Tau seem insufficient to differentiate between healthy and diseased Tau, high phosphorylation does not necessarily lead to Tau aggregation (Wegmann et al., 2021). In addition, posttranslational modifications of Tau other than phosphorylation, such as ubiquitination, acetylation, and methylation, may also regulate the formation of Tau aggregates (Wegmann et al., 2021).
Indeed, abnormal posttranslational modifications have been reported in DS patients (Kerkel et al., 2010; Jones et al., 2013; Sailani et al., 2015; Tramutola et al., 2017) and warrant additional examination in future studies. In contrast to a previously published article on DS organoids (Gonzalez et al., 2018), we report a difference between the DS iPSC lines and their isogenic controls already at the iPSC stage. Furthermore, we systematically quantified the abnormalities of the AD-like neuropathological phenotype at two time points and demonstrated that the AD-like neuropathology progressively manifests itself in organoids during early development.

In summary, we have demonstrated that DS disease-specific iPS cells have a slower growth rate due to decreased proliferation. DS iPSC-derived brain organoids mimic the AD-like pathophysiological phenotype, including abnormal Aß accumulation and insoluble Tau accumulation. Our data strongly suggest that DS iPSC-derived cortical organoids illustrate well the notion that the molecular pathobiology of DS starts early in brain development, and that they can be used as a model system to study AD-like pathology and its progression before the clinical onset of the disease.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the IRB at the University of California San Diego. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author contributions

HZ and GH designed the experiments and wrote the manuscript. HZ performed the experiments and data analysis. All authors contributed to the article and approved the submitted version.

Funding

This project was supported by National Institutes of Health (NIH) grant 1R01DA053372.

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

SUPPLEMENTARY FIGURE 1
Corresponding to Figure 1B: expression profile of chromosome 21 genes from individual DS iPSC and isogenic control iPSC lines.

SUPPLEMENTARY FIGURE 2
(A) Corresponding to Figure 2A: total cell numbers were counted after 3, 5, and 7 days in culture from individual DS iPSC and isogenic control iPSC lines. (B) Corresponding to Figure 2D: the number of BrdU+ and EdU+ cells was summarized from individual DS iPSC and isogenic control iPSC lines. (C) A representative double labeling of SOX2 (red) and Tra-1-60 (green) in DS and control iPSCs at day 5 (top panel); a representative double labeling of Nestin (red) and Nanog (green) in DS and control iPSCs at day 5 (bottom panel, left) and in 4-week-old control organoids (bottom panel, right). Immunostaining of organoids was used as a positive control for the Nestin (red) antibody under the same immunostaining and imaging conditions. (D) Corresponding to Figure 2F: the number of cleaved caspase 3+ cells was summarized from individual DS iPSC and isogenic control iPSC lines.

SUPPLEMENTARY FIGURE 3
(A) Corresponding to Figure 3B: immunoreactivity of the Aß antibodies D54D2 and 82E1 was summarized from individual DS and isogenic control iPSC line-derived organoids, respectively.
(B) A representative double labeling of the Aß antibody 82E1 (red) and MAP2 (green) in DS and control organoids (top panel). A representative double labeling of S100 (red) and MAP2 (green) in DS and control organoids (bottom panel). (C) Corresponding to Figure 3D: immunoreactivity of Amylo-Glo was summarized from individual DS and isogenic control iPSC line-derived organoids. (D) Corresponding to Figure 3E: the ratio of Aß42/Aß40 was summarized from individual DS and isogenic control iPSC line-derived organoids.

SUPPLEMENTARY FIGURE 4
(A) Corresponding to Figure 4B: immunoreactivity of pTau S396 was summarized from individual DS and isogenic control iPSC line-derived organoids. (B) A representative double labeling of pTau S396 (red) and MAP2 (green) in DS and control organoids. (C) Corresponding to Figure 4D: immunoblotting analysis of pTau S396/Tau was summarized from individual DS and isogenic control iPSC line-derived organoids. (D) Corresponding to Figure 4E: the ratio of insoluble Tau/total Tau (soluble Tau + insoluble Tau) was summarized from individual DS and isogenic control iPSC line-derived organoids.
2022-12-08T14:34:49.691Z
2022-12-08T00:00:00.000
{ "year": 2022, "sha1": "9a8fa4cd46f7dc458a5a206f2535af0b6d917f5a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "9a8fa4cd46f7dc458a5a206f2535af0b6d917f5a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
85021722
pes2o/s2orc
v3-fos-license
Assessing Bioenergy Harvest Risks: Geospatially Explicit Tools for Maintaining Soil Productivity in Western US Forests

Biomass harvesting for energy production and forest health can impact the soil resource by altering inherent chemical, physical and biological properties. These impacts raise concern about damaging sensitive forest soils, even with the prospect of maintaining vigorous forest growth through biomass harvesting operations. Current forest biomass harvesting research concurs that harvest impacts to the soil resource are region- and site-specific, although generalized knowledge from decades of research can be incorporated into management activities. Based upon the most current forest harvesting research, we compiled information on harvest activities that decrease, maintain or increase soil-site productivity. We then developed a soil chemical and physical property risk assessment within a geographic information system for a timber producing region within the Northern Rocky Mountain ecoregion. Digital soil and geology databases were used to construct geospatially explicit best management practices to maintain or enhance soil-site productivity. The proposed risk assessments could aid in identifying resilient soils for forest land managers considering biomass operations, policy makers contemplating expansion of biomass harvesting and investors deliberating where to locate bioenergy conversion facilities.

Introduction

The demand for forest products is projected to increase with global population growth over the next century, while actively managed forest land is projected to decrease significantly [1]. The search for carbon-neutral alternative energy sources, including forest bioenergy, further increases pressure on the productive capacity of the forest land base. These increasing demands on forest production capacity raise concerns over the capability of forest lands to meet society's demand for forest products. Can forest soils in the western US support more intensive timber harvesting for both traditional and emerging bioenergy markets? Is it reasonable to assume that biomass harvesting and sustainable soil productivity are compatible? Both private and public forest land holders seek answers to these questions for both economic and environmental reasons.

Private sector forestry in the western US is under intense domestic and global competition in the wood products market [2,3]. To meet changes in global market competition, private forest land holders are shifting rapidly from extensive forestry to intensive forestry [1,4,5]. Concomitantly, what was traditionally left as biomass residue following either pre-commercial and commercial thinnings or regeneration harvests is now under consideration as a further stream of revenue and bioenergy from the burgeoning "green economy" [6]. The primary question for private forest land holders then becomes: how does intensive forestry, and perhaps utilization of harvest "waste" for bioenergy, affect long-term soil-site productivity and thus long-term net revenue?
For public sector forest management the issue is not whether to be globally competitive in the forest products market, but how to maintain forest health and site productivity [7]. Decades of fire suppression and changing climatic patterns have left many western forests overstocked and prone to insect and disease attack and catastrophic wildfire events [8,9]. Further exacerbating forest health problems is the fact that public land management agency budgets are shrinking, decreasing their capacity to improve forest conditions. Thus, the question for public land managers is twofold: (1) can emerging biofuel markets offset the cost of forest improvement treatments; and (2) how does removal of biomass affect soil-site properties and thus future productivity?

Generally, all forest soil productivity research to date supports the following statement: forest harvest treatments will alter soil physical, chemical, and biological properties [5]. The degree and extent of this disturbance is usually site specific [16]. Soil disturbance is primarily attributed to soil compaction and to displacement of the organic matter-rich duff and mineral soil by either tracked or rubber-tired ground-based harvest systems [24,25]. Reeves et al. [26] estimated that ground-based harvest equipment can disturb up to 15 percent of a unit, depending on season and landform. In a summary of disturbance effects on subsequent forest productivity, Grigal [5] roughly estimated that approximately 10 percent of forest productivity is lost following a typical harvest treatment in Douglas-fir and ponderosa pine forests of the western US. However, other research, including that of Grigal [5], has shown that forest productivity loss following harvesting activities is not uniformly observed. Soil disturbance has shown positive, negative or no effects on second-rotation vegetation growth, depending on site-specific attributes [16].
Widely divergent responses to similar harvest treatments can generally be summarized as a function of forest floor depth (all organic horizons), soil organic matter content, soil texture, quantity of coarse fragments, soil depth and mineralogy [16,27]. Deeper, fine-textured soils typically display decreases in forest productivity following compaction and displacement, but are less likely than coarse-textured soils to have productivity reductions due to nutrient removal (biomass harvesting). In contrast, shallower, coarse-textured soils are more likely to manifest an increase in productivity following some level of compaction, but are more susceptible to productivity declines following biomass removal or litter/topsoil displacement [16,28]. Pore size redistribution and shifts in soil mineralization rates are responsible for these observed patterns [13]. Compaction of fine-textured soil causes suboptimal aeration and drainage, increases soil strength and decreases root growth; in coarse-textured soil, by contrast, a reduction of macropore space following compaction increases plant available water, decreases drought stress, and thereby prolongs the growing season. However, coarse-textured soil is more susceptible to forest productivity declines following nutrient removals because of shallow forest floor layers and low soil mineralization rates [7]. Given this current consensus on harvesting effects on soil physical and chemical characteristics, geospatial soil disturbance risk assessments would be advantageous for identifying resilient soils capable of supporting long-term biomass harvesting. Such geospatial risk assessments are now possible over much of the western US, as land resource inventory agencies (e.g., Natural Resources Conservation Service, United States Forest Service, United States Geological Survey) provide readily available, spatially explicit, digital geology, soil and vegetation resource inventories [29].

Thus, based on the agreement of the forest research findings presented above, here we illustrate an applied methodology for geospatially defining areas of soil sensitivity and provide a geospatial mapping tool that can be developed for planning future forest harvest activities. Forest soil productivity risk assessments are proposed within a geographic information system (GIS) based on soil and geologic parent material properties and on thresholds or limits that have been observed to positively or negatively affect forest growth. The objectives are to: (1) present a risk assessment process that is widely applicable across the western US and beyond; (2) provide a unit-level management tool useful to managers, planners or policy makers for both public and private forest lands; and (3) describe best management practices (BMPs) for the differing risks assessed within a selected area of interest.
Timberlands of Western US Ecoregions

Western US forests are found across 16 Level III ecoregions [30] (Figure 1). Each of these forested ecoregions is comprised of forestland and timberland, with timberlands being capable of producing >0.57 m³ of commercial wood volume per year. The focus of this paper is upon the timberlands of the western US, which are the most likely to have the potential to provide biomass for bioenergy production. These western US timberlands total ~52 million ha. Of this land base, private timberland accounts for ~18 million ha and public ~34 million ha [31]. We selected the ID612 soil survey area [32], as an example of an explicitly defined timber producing region within the Northern Rockies ecoregion, to demonstrate our proposed forest soil productivity risk rating process (Figure 1). This soil survey area has available updated digital soil and geology surveys that can be used to demonstrate the process of developing spatially explicit risk assessments for soil sensitivity to biomass removal impacts. Our risk assessment approach is applicable across timbered ecoregions of the western US where digital soil and geology surveys are available.

Survey Area Physiography

The ID612 survey is located within the northern Idaho Clearwater Mountains, a sub-range of the Rocky Mountains. This survey area encompasses ~336,000 ha of diverse physiographic features. The landform is characterized by mountainous landscapes to the north and east, while plateaus and benchlands incised with deep canyons are found in the south and west (Figure 2a). Annual precipitation averages <635 mm in the southwest and >1500 mm in the northeast [33]. Geology is represented by varying lithologies of igneous, metamorphic and sedimentary parent materials (Figure 2b). Common throughout the area are eolian deposits of Columbia Basin loess and Mt. Mazama volcanic ash, often found as intermixed mantles overlying the geologic parent material. Volcanic ash mantles >50 cm are commonly found in the mountainous regions of the study area. Soil taxonomic classifications across the south are generally Ultic Argixerolls or Vitrandic Fragixeralfs; in the north, Andic Fragiudalfs or Alfic Udivitrands would be typical [32]. Mixed-species timberlands dominate this region, with ponderosa pine (Pinus ponderosa Dougl.), Douglas-fir [Pseudotsuga menziesii (Mirb.) Franco var. glauca], western redcedar (Thuja plicata Donn) and grand fir [Abies grandis (Douglas ex D. Don) Lindl.] as the primary commercial timber species.
Soil Risk Assessments

The development of the soil nutrient and disturbance risk assessments relied on the assembly of a suite of geospatially explicit databases obtained through the Idaho Geological Survey (IGS), the Natural Resources Conservation Service (NRCS), and the Intermountain Forest Tree Nutrition Cooperative (IFTNC). All layers were clipped to represent only landforms with slopes <45 percent (i.e., the upper limit of ground-based harvest activities) [35]. A 45 percent slope cutoff was deliberately selected over the traditional 35 percent cutoff because ground-based harvesting equipment in the western US is often used on slopes >35 percent. Thus, the maximum effect of ground-based operations on steeper slopes would be captured. In addition, landforms with slopes <45 percent were deliberately selected for the following analyses as they present the greatest possibility for physical site disturbance from ground-based harvesting relative to skyline or helicopter harvesting on steeper slopes [26]. Ground-based harvesting on lower slopes also provides a higher financial return for biomass removal compared with the higher costs associated with other harvesting approaches.

From these digital sources, preliminary layers illustrating rock nutrient status and surface soil organic matter content were created to develop a chemical soil property-based nutrient status assessment. Similarly, preliminary layers of soil rutting hazard (compaction and displacement) and soil erosion hazard were created to develop a physical soil property-based disturbance susceptibility assessment. The following sections provide more details on the construction of the soil nutrient status and soil disturbance susceptibility risk assessments.

Soil Nutrient Status

Soil nutrient status was derived as a combination of (1) rock nutrient status (Figure 3a) and (2) surface soil organic matter content (Figure 3b). To obtain rock nutrient status, a regional, digital geology map (1:100,000 scale) was used to define the major rock lithologies found within the ID612 survey area [32,34]. These rock lithologies were then classified into one of four rock nutrient classes based on a modified Reiche's weathering potential equation [36] and on forest growth and fertilization research by regional forest scientists and geologists (Table 1) [37-39]. Rock nutrient status was categorized and scored as good (score 1), moderate (score 2), poor (score 3) or very poor (score 4). For example, a lithology with high weathering potential (i.e., low Si content) and high cation content (K, Ca, Mg, Na) would be ranked as good. A very poor rock nutrient status would be derived from rocks with low weathering potential (i.e., high Si content) and low cation content. For foresters to derive similar rankings in their regions, they must consult with regional forest and geology research scientists, as rock nutrient status is not an inherent output feature of digital geology maps.
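In practice, this classification step reduces to a lookup from lithology class to ordinal score. The following Python sketch illustrates one possible implementation; the lithology labels and score assignments below are hypothetical placeholders standing in for the Table 1 classification, which must come from regional weathering-potential and fertilization research.

```python
# Minimal sketch of the lithology-to-nutrient-score step. The lookup
# table is a hypothetical placeholder, not the actual Table 1 ranking:
# real assignments come from regional forest and geology scientists.
ROCK_NUTRIENT_SCORE = {
    "basalt": 1,             # high weathering potential, cation-rich -> good
    "calc-metasediment": 1,
    "granite": 2,
    "schist": 2,
    "siltite-argillite": 3,
    "quartzite": 4,          # low weathering potential, Si-rich -> very poor
}

def rock_nutrient_score(lithology: str, default: int = 3) -> int:
    """Return the ordinal rock nutrient score (1 = good ... 4 = very poor).

    Unmapped lithologies fall back to a conservative 'poor' score."""
    return ROCK_NUTRIENT_SCORE.get(lithology.strip().lower(), default)

if __name__ == "__main__":
    for lith in ["Basalt", "Quartzite", "gneiss"]:
        print(f"{lith}: score {rock_nutrient_score(lith)}")
```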
Surface soil organic matter content to a depth of 30 cm was obtained from the ID612 1:24,000 digital soil survey map using the NRCS Soil Data Viewer 5.2® extension within ArcGIS 9.3® (Figure 3b) [32,40]. A soil condition weighted average of percent organic matter content in the surface soil was used to obtain a single value for each soil mapping unit. This was necessary because there is often more than one soil component, with varying physical and chemical features, within a single mapping unit. A spatial delimiter was used to exclude minor soil components from unduly influencing a mapping unit average. Components were excluded if they occupied 25 percent or less of the land area in a mapping unit. A 25 percent threshold was used due to the complex topography associated with this study area. In regions with less topography, a smaller threshold (e.g., 5-15 percent) would be more appropriate. The weighted map unit average of percent soil organic matter content was then manually classed and scored into four levels: very high (>12% organic matter content, score 1), high (8-12% organic matter content, score 2), medium (4-8% organic matter content, score 3) and low (<4% organic matter content, score 4). These classes, although arbitrary, reflect the relative levels of surface soil organic matter content in this region. These classifications are not necessarily reflective of the range of values found in other regions across the western US, and each region of interest should be approached individually to define a regionally pertinent nutrient and organic matter status.

Soil Disturbance Susceptibility

Soil disturbance susceptibility was built on (1) soil rutting hazard (i.e., compaction and displacement) (Figure 4a) and (2) soil erosion hazard (Figure 4b). These assessments were obtained from the ID612 1:24,000 digital soil survey map using the NRCS Soil Data Viewer 5.2® extension within ArcGIS 9.3® [32,40]. Similar to soil organic matter content, a soil condition weighted average was used with a 25 percent component delimiter to obtain a single classification rating for each mapping unit within the survey area. Soil rutting hazards were developed by the NRCS based on the following considerations: (1) 3-10 passes of equipment on soils near field capacity; (2) operation of standard, non-flotation rubber-tired equipment; (3) year-long water tables <30 cm from the soil surface; and (4) soil displacement and puddling that may affect groundwater hydrology and productivity of the site. Rankings were defined as an interaction between depth to water table, rock fragments on or below the soil surface, the Unified Classification Group (textural classes) and slope (Table 2) [41]. For example, shallow soils with abundant coarse fragments and/or coarse texture on flat terrain would not be expected to rut easily, and would thus be ranked as a slight rutting hazard. Conversely, deep, fine-textured soils with few coarse fragments on steeper slopes would rut and compact readily, and would thus be ranked as a severe rutting hazard. Rutting hazards were classed and scored as slight (score 1), moderate (score 2) and severe (score 3) (Table 2).
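The map-unit aggregation used for both the organic matter layer and the hazard ratings lends itself to a compact sketch. Assuming a hypothetical component table of (component percent, attribute value) pairs per mapping unit — in practice drawn from the SSURGO component table via the Soil Data Viewer — the Python snippet below drops minor components at the 25 percent delimiter, computes the area-weighted average, and bins an organic matter result into the four scores described above.

```python
# Sketch of the soil-condition weighted average with the 25 percent
# minor-component delimiter. Input data are hypothetical stand-ins for
# a SSURGO-style component table.
def weighted_average(components, min_pct=25.0):
    """components: list of (component_percent, attribute_value) pairs."""
    major = [(pct, v) for pct, v in components if pct > min_pct]
    if not major:               # no component dominates; keep them all
        major = components
    total = sum(pct for pct, _ in major)
    return sum(pct * v for pct, v in major) / total

def organic_matter_score(om_pct):
    """Class the weighted organic matter percent into the four scores."""
    if om_pct > 12: return 1    # very high
    if om_pct > 8:  return 2    # high
    if om_pct > 4:  return 3    # medium
    return 4                    # low

map_unit = [(55, 9.5), (30, 5.0), (15, 2.0)]   # hypothetical mapping unit
om = weighted_average(map_unit)
print(f"weighted OM = {om:.1f}% -> score {organic_matter_score(om)}")
```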
Soil erosion hazards were developed by the NRCS based on the following considerations: (1) soil susceptibility to sheet and rill erosion from exposed mineral soil surfaces caused by various harvest practices; (2) operational activities that disturb organic surface material, resulting in 50 to 75 percent bare ground in the affected area; and (3) the use of any equipment type or size. Rankings were defined as an interaction between slope and the soil erosion factor Kw (Table 3) [41]. Kw, which is used within the Revised Universal Soil Loss Equation (RUSLE), is a function of percent silt, sand and organic matter, soil structure and saturated hydraulic conductivity [42]. Slight soil erosion would be expected to occur on flatter terrain, or on soils with high mineral soil organic matter content and saturated hydraulic conductivity, as well as abundant soil cover (surface organic matter, moss, understory vegetation, etc.); severe soil erosion would occur on soils with low saturated hydraulic conductivity, steeper slopes, and little protective soil cover. Based on these criteria, soil erosion susceptibility rankings following harvest activities were classified and scored as slight (score 1), moderate (score 2), severe (score 3) and very severe (score 4) (Table 3).

Table 3. NRCS soil erosion hazard as a function of slope and the soil erosion factor Kw for the ID612 survey area of the Northern Rockies ecoregion, USA [41].

Soil Nutrient and Disturbance Risk Assessments

Final risk assessment maps for both soil nutrition and disturbance susceptibility following ground-based harvest treatments were derived by: (1) assigning equal weights to each layer's classification score on a 30 m pixel basis; (2) summing the class scores at each pixel; and (3) obtaining the average pixel score. The average scores across the pixels were then classed into four risk rating categories: low (score 1-1.5), moderate (score 2-2.5), high (score 3-3.5) and severe (score 4) (Figure 5a,b).
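This overlay step can be illustrated with a few lines of numpy. The 3×3 arrays below are synthetic stand-ins for co-registered 30 m score rasters, and the bin edges between the stated score categories are an assumption on our part, since the text leaves gaps between, e.g., 1.5 and 2.

```python
# Pixel-wise combination of equally weighted class-score layers into
# the final four risk categories. Arrays are synthetic stand-ins for
# co-registered 30 m rasters clipped to slopes < 45 percent.
import numpy as np

rock    = np.array([[1, 2, 2], [3, 3, 4], [1, 1, 2]], dtype=float)
om      = np.array([[2, 2, 3], [3, 4, 4], [1, 2, 2]], dtype=float)
rutting = np.array([[3, 3, 3], [3, 3, 3], [2, 3, 3]], dtype=float)
erosion = np.array([[1, 1, 2], [2, 3, 4], [1, 1, 1]], dtype=float)

nutrient_risk    = (rock + om) / 2.0          # equal weights, then average
disturbance_risk = (rutting + erosion) / 2.0

def risk_class(mean_score):
    # 1 = low (1-1.5), 2 = moderate (2-2.5), 3 = high (3-3.5), 4 = severe;
    # midpoint bin edges are assumed to cover the gaps between classes.
    return np.digitize(mean_score, [1.75, 2.75, 3.75]) + 1

print("nutrient risk classes:\n", risk_class(nutrient_risk))
print("disturbance risk classes:\n", risk_class(disturbance_risk))
```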
Soil Nutrient Status

Rock nutrition across the ID612 survey area was shown to have moderate to good status (Figure 3a) [37]. The primary rock types associated with these classifications were igneous basalt and granite, schist, and carbonate/calcium-rich metasediments. Nutrient-poor and very poor parent materials were primarily associated with metasedimentary quartzite and some formations of siltite/argillite, which composed a relatively small proportion of the timberland base [37].

Surface soil organic matter content was loosely correlated with the climatic zones found in the region (data not shown). Warmer temperatures and lower precipitation in the south and west of the survey area produce sparse forested communities, and thus less organic matter returned annually to the soil and higher decomposition rates (Figure 3b). As precipitation increases to the north, forest biomass increases, returning greater amounts of organic matter annually to the soil. Despite the higher organic matter inputs in the north, colder air and soil temperatures usually result in lower decomposition rates (Figure 3b).

These results suggest that across the majority of the soils in ID612 there is minimal risk of long-term soil productivity loss due to nutrient removals following biomass harvesting (Figure 5a). However, in the drier ecotypes, where biomass production is low and nutrient cycling more rapid, there is a significantly greater risk of nutrient loss, and thus of long-term soil productivity loss, if both forest biomass and soil organic matter are removed or displaced.

Soil Disturbance Susceptibility

With very little exception, the majority of ID612 soils are susceptible to severe rutting (i.e., compaction and displacement) following ground-based harvest activities (Figure 4a). Surface soil parent material, slope and their interaction are the primary factors responsible for this rating. Many of the soil mapping units in this survey area are covered by a variably thick surface mantle of loamy-mixed volcanic ash [32]. These ash-influenced soils are fine-textured, have a friable to weak subangular blocky soil structure and are relatively free of coarse fragments, rendering them highly susceptible to rutting. Further, as landform slope increases, equipment operation exacerbates soil compaction and displacement due to unbalanced axle weight allocation as machinery traverses up, down or across the slope. Soil rutting is also increased when equipment operation continues while the soils are near field water capacity.

Unlike soil rutting, soil erosion susceptibility for ID612 falls predominantly into a slight or moderate hazard rating (Figure 4b). The areas that show severe to very severe soil erosion hazards are primarily an integrative function of high silt content and increasing slope (data not shown). Soils high in silt content are more susceptible to erosion following loss of organic matter cover, as they are easily detached, tend to crust, and generate high rates of runoff [43]. The majority of the soils throughout this region, however, are capable of absorbing water inputs due to moderate/high soil organic matter content and/or a balanced texture of sand, silt and clay.

Best Management Practices

A comparison of the integrated nutrient and disturbance risk assessments suggests that loss of nutrients following harvest treatments in ID612 is less of a concern for future soil productivity than is displacement and compaction of the organic-rich surface soil (Figure 5a,b). While nutrient loss may be a concern in some areas within the ID612 region, the loss of soil water holding capacity and the increased soil strength following compaction of fine-textured soils are of particular concern. Vulnerable soils can lose significant ecosystem function through compaction and displacement, which in turn affects long-term soil-site productivity [7]. Consequently, it is appropriate that best management practices be developed to guide silvicultural prescriptions in order to maintain soil function in this region.

The development of the soil chemical and physical property risk assessment maps allows us to spatially define specific best management practices for ground-based harvest treatments across the ID612 survey area. Based on the most current literature reviews [7,43] of harvest effects on soil-site productivity in this region, we developed guidelines for biomass removal, appropriate harvest season and machine traffic limitations (Table 4). Land resource managers can then link these geospatial recommendations within their management information systems to provide long-term silvicultural guidance.
We recognize that these BMPs are only the first step towards developing a risk assessment system. Long-term forest soil sustainability is often more influenced by ephemeral conditions such as snow pack, soil moisture, surface organic horizons, and operator skill than by chemical or physical limitations within the soil [26]. However, this tool allows land managers to make informed decisions about harvest systems based on site limitations, improve site selection criteria and develop an understanding of the possible soil impacts during harvest operations.

In addition, we recognize that this model does not account for alterations in soil biological properties. Nitrogen (N) is usually the key nutrient limiting growth in western US forests, and its availability is dependent on soil microbial activity [7]. Often, N changes can be linked to changes in soil temperature (removal of the canopy or logging slash). As temperature increases, organic matter on the soil surface and within the mineral soil decomposes rapidly, increasing mineralizable N rates [44]; large amounts of logging slash on the soil surface, however, may increase immobilization of nutrients until the residues are decayed [45]. It is also crucial to remember that many belowground processes are linked to the retention of coarse woody debris. Thus, any biomass removal operation on public (or private) forest timberland should avoid large losses of coarse woody material in order to maintain ecological function [7].

As with other forest management practices, biomass removal for energy production or forest health requires attention to individual site characteristics and consideration of management objectives as well as long-term sustainability [22]. The risk assessments and best management practices proposed here highlight the relative ease with which soil chemical and physical properties can be assessed and linked together to help guide land management decisions.

1 Feller buncher—a motorized vehicle with a grip and cutting attachment that can rapidly cut and gather several trees before felling and bunching them for subsequent forwarding to a landing for delimbing and bucking. 2 Cut-to-length—a motorized vehicle capable of gripping, felling, delimbing and bucking a tree at the stump.

Conclusions

Forest management practices for energy production should ensure the maintenance of long-term soil productivity. To accomplish this, site-specific considerations of management objectives can be linked geospatially by using available nutrient data, soil survey data and geology layers, as demonstrated through the development of the proposed risk assessments. The key soil properties that affect soil disturbance, compaction, erosion or nutrient depletion (e.g., soil texture, slope, surface cover and geology) are relatively easy to access from land resource mapping agencies. However, understanding how these changes alter tree nutrition or growth comes from having long-term data on various forest tree species across a variety of stand types and site conditions. In areas where such data are not available, the existing literature may provide guidance.
Based on our proposed risk rating system and a review of the available literature, we developed biomass harvest best management practices that adapt management to the varying chemical and physical soil conditions inherent to western US forests. The proposed soil chemical and physical property risk assessment process can be expanded to other regions across the western US where digital soil and geologic information is available. Such an approach would aid in identifying resilient soils for forest land managers considering biomass operations, policy makers contemplating expansion of biomass harvesting and investors deliberating where to locate bioenergy conversion facilities.

Figure 1. Forested ecoregions across the western US and the proportion of timberlands held by private and public forest landholders (pie charts). Text reflects the hectares of timberland in each state. The Natural Resources Conservation Service (NRCS) soil survey area ID612 denotes the area of interest for this paper.

Figure 2. (a) Elevation gradients across the northern Idaho ID612 survey area within the Northern Rockies ecoregion, USA [34]. The blue water body represents the outline of Dworshak Reservoir on the North Fork of the Clearwater River. (b) Geologic parent material across the ID612 survey area, representing extrusive and intrusive igneous rocks, metasedimentary and metamorphic rocks, with minor components of unconsolidated deposits [34].

Figure 3. (a) Relative geologic soil parent material nutrition across the ID612 survey area of the Northern Rockies ecoregion, USA [36-39]; (b) surface soil organic matter classes to a depth of 30 cm across the ID612 survey area [32,40,41]. Gray scale areas represent excluded landforms with >45 percent slope [35].

Figure 5. (a) Nutrient risk assessment for long-term soil productivity following ground-based harvest activities in the ID612 survey area of the Northern Rockies ecoregion, USA; (b) soil disturbance risk assessment for long-term soil productivity following ground-based harvest activities in the ID612 survey area. Gray scale areas represent excluded landforms with >45 percent slope [35].

Table 2. NRCS [41] soil rutting hazard as a function of soil texture, depth to water table and soil rock fragment content for the ID612 survey area of the Northern Rockies ecoregion, USA. Steeper slope classes (e.g., >20%) may shift ratings to one class more limiting [41].

Table 4. Best management practices for maintaining soil productivity during ground-based biomass harvest activities, by risk assessment class, in the ID612 survey area of the Northern Rockies ecoregion, USA. Table colors are intended to reflect the risk rating color scheme in Figure 5. Example practices from the table include:
• Manual felling only
• Shovel/tractor yarding limited to winter only
• Ensure equipment is matched to site
• Monitor soil moisture
• Maintain forest floor
• Use slash mats and/or balloon tires on wet soil
2015-03-27T18:11:09.000Z
2011-09-20T00:00:00.000
{ "year": 2011, "sha1": "a8981372edc217017ae44c33d7076bc33b16bf5c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/2/3/797/pdf?version=1316608253", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "a8981372edc217017ae44c33d7076bc33b16bf5c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
56123033
pes2o/s2orc
v3-fos-license
The Singular Set of Higher Dimensional Unstable Obstacle Type Problems

In this paper we will investigate the singular points of the following unstable free boundary problem:
\begin{equation}\label{Eq} \Delta u = -\chi_{\{u>0\}} \quad \text{in } B_1(0), \end{equation}
where $\chi_{\{u>0\}}$ is the characteristic function of the set $\{u>0\}$.

2000 Mathematics Subject Classification: Primary 35R35; Secondary 35B40, 35J60. Key words and phrases: free boundary, regularity of the singular set, unique tangent cones, partial regularity. H. Shahgholian has been supported in part by the Swedish Research Council. Both J. Andersson and G.S. Weiss thank the Göran Gustafsson Foundation for visiting appointments to KTH.

Introduction.

In this paper we will investigate the singular points of the following unstable free boundary problem:
$$\Delta u = -\chi_{\{u>0\}} \quad \text{in } B_1(0), \qquad (1.1)$$
where $\chi_{\{u>0\}}$ is the characteristic function of the set $\{u > 0\}$. This problem was first investigated by G.S. Weiss and R. Monneau [14]. In [14], $C^{1,1}$-regularity of locally energy-minimising and maximal solutions of (1.1) is shown. There is also some discussion regarding the possibility of the existence of singular points, that is, points $x^0 \in B_1(0)$ such that $u \notin C^{1,1}(B_r(x^0))$ for any $r > 0$. Such points are proved to be totally unstable [14]. Let us formally define singular points before we proceed.

Definition 1.1. Let $u$ be a solution to (1.1). Then we define $S(u)$, the set of singular points of $u$, according to
$$S(u) = \{x \in B_1(0);\ u \notin C^{1,1}(B_r(x)) \text{ for any } r > 0\}.$$
Furthermore, we will denote by $S^{n-2}(u)$ the singular points of co-dimension 2, that is, the points $y \in S(u)$ whose normalised rescalings converge to a rotation of a multiple of $x_{n-1}^2 - x_n^2$ for some $Q \in \mathcal{Q}$ and some sequence $r_j \to 0$, where $\mathcal{Q}$ is the matrix group of rotations of $\mathbb{R}^n$.

It was shown in [14] or [3] that if $y \in S(u)$ then
$$\lim_{r_j \to 0} \frac{u(r_j x + y)}{\|u(r_j \cdot + y)\|_{L^2(B_1(0))}} \in P_2$$
if the right-hand side is defined; here $P_2$ is the set of homogeneous harmonic polynomials of degree 2. Since the only homogeneous harmonic polynomial of degree 2 in $\mathbb{R}^2$, up to translations, rotations and multiplicative constants, is $x_1^2 - x_2^2$, it follows that $S^{n-2}$ singles out the singular points with co-dimension 2 singularities.

In [4], two of the authors showed rigorously that singular points exist, that is, there exists a solution $u$ to (1.1) such that $S(u) \neq \emptyset$. This investigation was followed up in [2] and [3], where we provided a total classification of singular points in $\mathbb{R}^2$ and $\mathbb{R}^3$, respectively. In this paper we intend to prove that in $\mathbb{R}^n$ the singular points of smallest co-dimension are locally contained in a $C^1$-manifold of dimension $n-2$, and that the free boundary $\Gamma_u$, defined below, consists of two $C^1$ manifolds of dimension $n-1$ intersecting orthogonally at such singular points. Our main theorem is

Theorem 1.2. Let $u$ be a solution to (1.1) and assume that, for some sequence $r_j \to 0$, the rescalings of $u$ at the origin converge to a rotation of a multiple of $x_{n-1}^2 - x_n^2$ (in particular, $0 \in S^{n-2}(u)$). Then the blow-up limit of $u$ at the origin is unique, and for each $\eta > 0$ there exists an $r_\eta > 0$ such that, in $B_{r_\eta}(0)$, the free boundary consists of two $C^1$ manifolds intersecting at right angles at the origin. Furthermore, there is a constant $r_0(u) > 0$ such that the set $S^{n-2}(u) \cap B_{r_0}(0)$ is contained in a $C^1$ manifold of dimension $n-2$.

We would like to place this result in a long tradition of regularity results for parametric non-linear PDE. In particular, we may view the free boundary $\Gamma_u = \{x \in B_1(0);\ u(x) = 0\}$ as a parametric surface with singular points in $S(u)$. Some of the most famous results in this area are those of Bombieri, De Giorgi, Giusti and Simons ([6], [17]), which state that no minimal cones exist for minimal surfaces in $n < 8$. We should also mention the result by B.
White [18], where uniqueness of tangent cones for 2-dimensional minimal surfaces is proved. From our point of view, White's proof is interesting in that he uses a Fourier series expansion in constructing comparison surfaces. However, we work in $n$ dimensions, which means that our Fourier expansions are considerably more subtle and involved than those that appear in [18]. Singularities in parametric problems have appeared in other areas of mathematics as well, and our results have some similarities to the theory of harmonic mappings (see [16] for a good overview). One could also mention a certain similarity with the theory of singularities that arise for $\alpha$-uniform measures [13].

Since the second order harmonic polynomials form a finite dimensional space, the map $F$ is a map between finite dimensional vector spaces. The main difficulty is that $F$ is highly non-linear, and we need quite subtle estimates to characterise the map. On the positive side, we may write down $\Pi(u, r, 0)$ explicitly, modulo lower order terms, by means of Theorem 3.5 of Karp and Margulis [12]. The definition of $F$ involves a Fourier series expansion of $-\chi_{\{\Pi(u,r,0)>0\}}$ on the unit sphere. Our main effort will be to estimate the Fourier coefficients in this expansion when $\Pi(u, r, 0)/\sup_{B_1}|\Pi| \approx x_{n-1}^2 - x_n^2$. For further details on the idea of the proof we refer the reader to [3].

Background Material and General Strategy.

In this section we will state some of the results of [3] and outline our strategy (which is similar to the strategy of [3]). Our starting observation is the following proposition (Proposition 5.1 in [14]).

Proposition 3.1. Let $u$ be a solution of (1.1) in $B_1(0)$ and consider a point $x^0 \in S(u)$; then the rescalings of $u$ at $x^0$ converge to a second order homogeneous harmonic polynomial for each sequence $r_j \to 0$ such that the limit exists.

The proof is a fairly standard application of a monotonicity formula. If $u$ is a solution to (1.1), then $\Delta u \in L^\infty$, which directly implies that $D^2 u \in BMO(B_{1/2}(0))$; this in particular implies, via the Sobolev inequality, that the rescaled family $u(rx + x^0)/r^2$ is locally bounded in $L^2$ and pre-compact. It will be convenient for some calculations later to subtract a harmonic polynomial in (3.4) instead of the polynomial appearing there. We make the following definition.

Definition 3.2. By $\Pi(u, r, x^0)$ we will denote the projection operator onto $P_2$, defined as follows: $\Pi(u, r, x^0) = \tau_r p$, where $\tau_r \in \mathbb{R}^+$ and $p \in P_2$ with $\sup_{B_1}|p| = 1$ are chosen so that $\tau_r p$ best approximates the rescaling of $u$ at $x^0$ within $P_2$. We will often write $\Pi(u, r)$ when $x^0$ is either the origin or given by the context. By definition, $\tau_r = \sup_{B_1}|\Pi(u, r)|$ and $p_r = \Pi(u, r)/\tau_r$.

The Lemma is proved for $n = 3$ in [3], but the proof is the same in arbitrary dimension. This estimate, together with (3.5), implies that $u(\cdot + x^0) = \Pi(u, r, x^0) + {}$ a lower order perturbation. Using this pre-compactness, for some sequence $r_j \to 0$ we may extract a sub-sequence, which we still denote by $r_j$, along which the rescalings converge to some function $Z_p$. It is not difficult to see that $Z_p$ is the unique solution to (3.9).

In order to show regularity of the free boundary near a singular point, we would have to control the limit of $\Pi(u, r, 0)$ as $r \to 0$. If one can show that the limit is unique, then it follows that the blow-up of $u$ at the singular point is unique. The following result, Corollary 7.3 in [3], gives a quantitative measure of how the function $Z_{\Pi(u,r,0)}$ controls the difference between $\Pi(u, r, 0)$ and $\Pi(u, r/2, 0)$.

Proposition 3.4. Let $u$ solve (1.1) in $B_1 \subset \mathbb{R}^n$ and assume that $\sup_{B_1}|u| \le M$, $u(0) = |\nabla u(0)| = 0$, and that the smallness assumptions of [3, Corollary 7.3] hold for some $\rho \le \rho_0$ and $r \le r_0$.

In order to estimate $\sup_{B_1(0)}|\Pi(u, r, 0) - \Pi(u, r/2, 0)|$ we thus need to be able to calculate $\Pi(Z_{\Pi(u,r,0)}, 1/2, 0)$.
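To make the projection $\Pi$ concrete, the following Python sketch computes a least-squares projection of a sampled function onto the span of the second order harmonic polynomials on $B_1$. Whether Definition 3.2 is exactly this $L^2$-type projection is an assumption on our part (the defining minimisation above is only paraphrased); the sketch merely illustrates the type of operator involved.

```python
# Numerical sketch: least-squares projection onto second order harmonic
# polynomials over B_1. Treating Pi(u, r, x0) as this kind of L^2-type
# projection is an assumption; the sketch only illustrates the operator.
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 50_000

# Rejection-sample points uniformly in the unit ball B_1.
pts = rng.uniform(-1.0, 1.0, (4 * N, n))
pts = pts[np.linalg.norm(pts, axis=1) < 1.0][:N]

def harmonic_span(x):
    """Spanning set of P_2: n*x_i^2 - |x|^2 (i = 1..n) and x_i x_j (i < j).

    The first n columns are linearly dependent (they sum to zero), so we
    solve with lstsq, which returns the minimum-norm coefficient vector."""
    r2 = (x**2).sum(axis=1)
    cols = [n * x[:, i]**2 - r2 for i in range(n)]
    cols += [x[:, i] * x[:, j] for i in range(n) for j in range(i + 1, n)]
    return np.column_stack(cols)

def project_P2(u_vals, x):
    A = harmonic_span(x)
    coef, *_ = np.linalg.lstsq(A, u_vals, rcond=None)
    return coef

# Test: u = (x_2^2 - x_3^2)/2 plus a small cubic term; the projection
# should recover the quadratic part (coefficients ~ (0, 1/6, -1/6, 0, 0, 0)).
u = 0.5 * (pts[:, 1]**2 - pts[:, 2]**2) + 0.05 * pts[:, 0]**3
print(project_P2(u, pts).round(3))
```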
We will calculate $\Pi(Z_{\Pi(u,r,0)}, 1/2, 0)$ with the help of the following theorem from [12].

Theorem 3.5. Let $\sigma \in L^\infty(\mathbb{R}^n)$ be homogeneous of zeroth order, that is, $\sigma(x) = \sigma(rx)$ for all $r > 0$. Assume that $\sigma$ has the Fourier series expansion $\sigma = \sum_i \sigma_i$ on the unit sphere, where $\sigma_i$ is a homogeneous harmonic polynomial of order $i$. Moreover, assume that $\Delta Z = \sigma$ and that $Z$ satisfies the growth conditions of [12]; then $Z$ can be written down explicitly in terms of the $\sigma_i$, with coefficients $a_i$.

Our strategy in the rest of the paper will be to use Theorem 3.5 to calculate $\Pi(Z_{\Pi(u,r,0)}, 1/2, 0)$ via the expression (3.10), where $\sigma_2$ is the second order term in the Fourier series expansion of $-\chi_{\{\Pi(u,r,0)>0\}}$ on the unit sphere. Using the expression (3.10) in Proposition 3.4 will give us enough information to deduce that the blow-up of $u$ is unique at all points $x^0 \in S^{n-2}(u)$.

Estimates of the Projections.

In order to estimate $\Pi(Z_p, 1/2)$ we need to calculate $a_2\sigma_2$ from Theorem 3.5. That involves calculating the second order Fourier coefficients of $-\chi_{\{p_\delta>0\}}$ on the unit sphere. To that end we choose $nx_i^2 - |x|^2$ for $i = 1, \dots, n$ and $x_ix_j$ for $i \neq j$ as a basis for the second order harmonic polynomials. We may choose coordinates adapted to the blow-up polynomial, where $\delta = (\delta_1, \delta_2, \dots, \delta_{n-2})$ and $\bar\delta = \sum_{i=1}^{n-2}\delta_i$, and we define the polynomial $p_\delta$, for a given vector $\delta \in \mathbb{R}^{n-2}$, in equation (4.11). We will assume, for definiteness, that $\bar\delta \ge 0$ (this is implicit in the definition of $p_\delta$ in equation (4.11)); if $\bar\delta < 0$ then all the following arguments go through with minor and trivial changes. It follows from symmetry (i.e., $-\chi_{\{p_\delta>0\}}$ is even and the $x_ix_j$ are odd on the unit sphere) that the Fourier coefficient of $x_ix_j$ is zero. Since we are only interested in points of $S^{n-2}(u)$, where the rescalings converge along some sequence $r_j \to 0$, we may assume that $|\delta| \ll 1$. We also denote by $B_i(\delta)$ the integral of $-\chi_{\{p_\delta>0\}}$ against $nx_i^2 - |x|^2$ over the unit sphere; here $dA$ is the surface element. It follows that the Fourier coefficient of $nx_i^2 - |x|^2$ is proportional to $B_i(\delta)$. Using that $\Pi(Z_{p_\delta}, 1) = 0$ by definition, together with Theorem 3.5, we may deduce (4.14).

We parametrise the unit sphere by polar coordinates $(\psi_1, \dots, \psi_{n-1})$; with this parametrisation, an area element on the unit sphere becomes a product of powers of sines of the angles. In order to estimate $B_i$ we will use the identity in (4.17) to write, with $k = n$, the integral as a sum of two terms, $I_{1,i}(\delta,\mu)$ and $I_{2,i}(\delta,\mu)$. We will need some further simplifications, where
$$S_{n-1,n}(\psi_{n-2}, \psi_{n-1}) = \sin^{n-1}(\psi_{n-2})\,\sin^{n}(\psi_{n-1}).$$

To estimate $I_{2,i}(\delta, \mu)$ we notice that, by our choice of polar coordinates, when $\psi_{n-1} \in (0, \pi/2 - \mu)$ the gradient of $p_\delta$ is bounded from below by a constant times $\mu$ on its zero level set. It is therefore very easy to estimate $I_{2,i}(\delta, \mu)$ by means of the co-area formula. By the co-area formula it follows that, for $t \in (0, 1)$ and with the notation $q_\delta$ for the part of $p_\delta$ determined by $\delta_1, \dots, \delta_{n-2}$, the corresponding level-set integrals are controlled; in particular we obtain (4.21).

We need to work a little harder in order to estimate $I_1(\delta, \mu)$. We begin by proving a simple lemma that will allow us to do some integrations explicitly modulo $O(|\delta|)$ terms: under its assumptions, there exists a constant $c > 0$ such that the stated estimates hold.

Proof: It is trivial that $1 - c\mu \le \sin(\psi_{n-3}) \le 1$ and that $1 - c\mu \le \sin(\psi_{n-2}) \le 1$, so the corresponding integrals can be evaluated explicitly up to $O(\mu)$ errors. We then use a change of variables; it is in this change of variables that we use the rather awkward definition of $A(\mu)$ in order to get a nice area of integration. We will assume that $\kappa > 0$; if $\kappa = 0$ then the argument is simple, and the case $\kappa < 0$ is treated analogously. For $i = 1, \dots, n-2$ we may write (4.26) in closed form, where we have used a standard trigonometric identity to evaluate the integral. For $i = n-1$ we can calculate the corresponding expression, and finally for $i = n$ we obtain the analogous closed-form expression.
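Although the displayed integrals are handled analytically in the paper, the quantities being estimated are easy to probe numerically. The sketch below Monte Carlo-estimates $B_i(\delta)$-type integrals of $-\chi_{\{p_\delta>0\}}$ against $nx_i^2 - |x|^2$ on $S^{n-1}$; the specific form of $p_\delta$ used (a harmonic perturbation of $x_{n-1}^2 - x_n^2$) and the normalisation of $B_i$ are our assumptions, since equation (4.11) is not reproduced here.

```python
# Monte Carlo probe of the second order Fourier coefficients of
# -chi_{p_delta > 0} on the unit sphere S^{n-1}. The form of p_delta
# below (a harmonic perturbation of x_{n-1}^2 - x_n^2) and the
# normalisation of B_i are assumptions standing in for equation (4.11).
import math
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 400_000

def p_delta(x, delta):
    # Assumed form; the quadratic coefficients are chosen trace-free so
    # that p_delta is harmonic: sum(delta) + (1 - s/2) + (-1 - s/2) = 0.
    s = delta.sum()
    quad = x[:, -2]**2 - x[:, -1]**2
    corr = (x[:, :-2]**2 * delta).sum(axis=1)
    corr -= s * (x[:, -2]**2 + x[:, -1]**2) / 2.0
    return quad + corr

# Uniform sample on S^{n-1}: normalised Gaussian vectors.
x = rng.standard_normal((N, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)

delta = np.full(n - 2, 0.01)
chi = (p_delta(x, delta) > 0.0).astype(float)

area = 2.0 * math.pi**(n / 2.0) / math.gamma(n / 2.0)   # |S^{n-1}|
for i in range(n):
    basis = n * x[:, i]**2 - 1.0        # n x_i^2 - |x|^2 on the sphere
    B_i = -area * np.mean(chi * basis)  # assumed normalisation of B_i
    print(f"B_{i+1}(delta) ~ {B_i:+.4f}")
```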
Proposition 4.3. If $|\delta|$ is small enough and $C_i(\delta)$ is defined according to the normalisation above, then there exists a universal constant $c$ such that the stated estimate holds.

Proof: In (4.19) we showed that we can write $B_i(\delta)$ as a sum of the two integrals $I_{1,i}$ and $I_{2,i}$. We also showed, in (4.21), that $I_{2,i}(\delta,\mu)$ is controlled. Also in (4.19) we showed how to write $I_{1,i}$ in polar coordinates. Furthermore, we showed in Lemmas 4.1 and 4.2 that the inner integral in (4.27) satisfies the required bounds for $(x_1, \dots, x_{n-2}) \in \partial B^{n-2}$. To finish the proof, we notice that the proposition follows for $\mu$ small enough if $|\delta| \ll \mu$. The final statement follows easily since $\lambda_1 > 0$.

Proof of the Main Theorem.

In this section we prove Theorem 1.2.

Remark: If $\sum_{i=1}^{n} C_i(\delta(r)) > 0$, a similar result holds and the proof goes through with trivial changes.

Proof: From (5.35) and (4.14) we can compute the coefficient of the $x_j^2$-term in $\Pi(u, r/2, 0)$. From (5.41) and Proposition 4.3 we obtain the corresponding estimate, where we used Lemma 4.2 in the first equality and (4.33) in the last equality. Using this and $\delta_j > 0$ in (5.43), we can deduce that the Lemma holds; in particular, if $|\delta|$ is small and (5.41) holds, then (5.38) holds if $\delta_j \ge C_\gamma \tau^{-\gamma}$. This is exactly what we wanted to prove.

To prove that $S^{n-2} \cap B_{r_0}(0)$ is contained in a $C^1$ manifold of dimension $n-2$ for some small $r_0$, we may proceed as in Theorem 12.2 in [3]. This proves Theorem 1.2.
2012-07-14T07:00:13.000Z
2012-07-14T00:00:00.000
{ "year": 2013, "sha1": "f5802b5adb54d95ab3eb896c02d1dfca7712b0ec", "oa_license": null, "oa_url": "http://www.ems-ph.org/journals/show_pdf.php?iss=1&issn=1120-6330&rank=7&vol=24", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "f5802b5adb54d95ab3eb896c02d1dfca7712b0ec", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
250379953
pes2o/s2orc
v3-fos-license
Non-Invasive Modalities in the Assessment of Vulnerable Coronary Atherosclerotic Plaques

Coronary atherosclerosis is a complex, multistep process that may lead to critical complications upon progression, revolving around plaque disruption through either rupture or erosion. Several high-risk features are associated with plaque vulnerability and may add incremental prognostic information. Although invasive imaging modalities such as optical coherence tomography or intravascular ultrasound are considered the gold standard in the assessment of vulnerable coronary atherosclerotic plaques (VCAPs), contemporary evidence suggests a potential role for non-invasive methods in this context. Biomarkers associated with deleterious pathophysiologic pathways, including inflammation and extracellular matrix degradation, have been correlated with VCAP characteristics and adverse prognosis. However, coronary computed tomography (CT) angiography has been the most extensively investigated technique, correlating significantly with VCAP features derived from invasive methods. The estimation of perivascular fat attenuation as well as radiomic-based approaches represent additional concepts that may add incremental information. Cardiac magnetic resonance imaging (MRI) has also been evaluated in clinical studies, with promising results across the various image sequences that have been tested. As far as nuclear cardiology is concerned, the implementation of positron emission tomography in VCAP assessment currently faces several limitations, including myocardial uptake of the radiotracer in the case of fluorodeoxyglucose, as well as motion correction. Moreover, the search for the ideal radiotracer and the most adequate hybrid combination (CT or MRI) is still ongoing. Looking to the future, the possible combination of imaging with circulating inflammatory and extracellular matrix degradation biomarkers in diagnostic and prognostic algorithms may represent the essential next step for the assessment of high-risk individuals.

Introduction

As a leading cause of morbidity and mortality, atherosclerotic cardiovascular diseases are constantly at the forefront of scientific research aiming to further unveil their pathophysiologic basis and to develop appropriate diagnostic and therapeutic approaches. Undoubtedly, the development of a plaque is the critical complication of atherosclerosis, and the investigation of its features may guide the tailored management of patients with atherosclerotic cardiovascular diseases. In the context of coronary artery disease (CAD), significant progress has been made towards the assessment of coronary atherosclerotic plaques, since several characteristics have been identified as high-risk for rupture or erosion, leading to the development of acute coronary syndromes (ACS). According to these properties, plaques which are prone to rupture or erosion may be defined as vulnerable plaques. Although invasive imaging methods remain the gold standard in the evaluation of vulnerable plaques, non-invasive modalities may be critical in the early detection of these abnormalities. In this review article, we summarize the latest advances in the non-invasive assessment of vulnerable coronary atherosclerotic plaques (VCAPs).

Features of the Vulnerable Coronary Atherosclerotic Plaque

The process of atherosclerosis is complex, consisting of multiple steps.
The invasion of low-density lipoprotein (LDL) molecules into the subendothelial space through the aid of extracellular matrix proteoglycans, and the ensuing LDL oxidation, constitute the pivotal initial step [1]. Endothelial dysfunction and permeability are key factors involved in the accumulation of the large LDL particles [2]. Traditional cardiovascular risk factors are implicated in the progression of endothelial dysfunction [2]. Following endothelial activation, an array of molecules (selectins, adhesion molecules) facilitate leukocyte rolling, adherence, and penetration into the subintimal space [3]. These leukocytes then differentiate into macrophages and engulf oxidized LDL, transforming into foam cells due to the presence of esterified cholesterol in lipid droplets [4]. Consequently, noxious inflammatory and oxidative responses arise [5,6], together with the activation and proliferation of vascular smooth muscle cells (VSMCs) in the media layer [7]. VSMCs in particular may also display phagocytic actions through the uptake of oxidized LDL, given that they are major contributors to atherosclerotic plaques [8,9]. Following their establishment, coronary atherosclerotic plaques progress through the continuous accumulation of lipids, the proliferation of VSMCs, and the decreased synthesis and increased degradation of collagen (Figure 1). Thinning of the fibrous cap follows, together with apoptosis and defective efferocytosis, which contribute to the formation of a rich lipid core. Such plaques may also be termed thin-cap fibroatheromas (TCFAs) [10]. Moreover, areas of calcification may also develop due to increased calcium deposition and decreased clearance. Spotty or microcalcification may lead to plaque instability [11]. Additionally, positive remodeling and plaque growth will promote neoangiogenesis, with the newly formed microvessels stemming from the vasa vasorum being another point of concern due to their ability to cause intraplaque hemorrhage [12].
Consequently, noxious inflammatory and oxidative responses arise [5,6], together with the activation and proliferation of vascular smooth muscle cells (VSMCs) in the media layer [7]. VSMCs in particular may also display phagocytic actions through the uptake of oxidized LDL, given that they are major contributors in atherosclerotic plaques [8,9]. Following their establishment, coronary atherosclerotic plaques progress through the continuous accumulation of lipids, the proliferation of VSMCs, and the decreased synthesis and increased degradation of collagen ( Figure 1). The thinning of the fibrous cap follows, together with the apoptosis and defective efferocytosis, which contributes to the formation of a rich lipid core. Such plaques may also be termed as a thin-cap fibroatheroma (TCFA) [10]. Moreover, areas of calcification may also develop due to increased calcium deposition and decreased clearance. Spotty or microcalcification may lead to plaque instability [11]. Additionally, positive remodeling and plaque growth will promote neoangiogenesis, with the newly formed microvessels that stem from the vasa vasorum being another point of concern due to their ability to cause intraplaque hemorrhage [12]. The characteristics of vulnerable plaque. A lipid-rich necrotic core, which is the outcome of macrophage multiplication and engulfment of LDL together with VSMC multiplication and differentiation into macrophages, may be detected. Moreover, decreased collagen synthesis and enhanced collagen degradation from the action of interferon-γ and MMPs, respectively, lead to the thinning of the fibrous cap. Other deleterious processes may also occur, such as the defective efferocytosis of cells, the spotty microcalcification as a result of inflammation and reduced collagen synthesis, and the vasa vasorum-derived neoangiogenesis. LDL: low-density lipoprotein, MMP: matrix metalloproteinase, MP: macrophage, TH1: type 1 T helper cell, VSMC: vascular smooth muscle cell. Clinical Significance of Vulnerable Coronary Atherosclerotic Plaque The correlation of high-risk features of VCAP with clinical events has been thoroughly described. In a landmark prospective study of 697 ACS survivors that underwent percutaneous coronary intervention (PCI) with intravascular ultrasound (IVUS), TCFAs were predictive of major adverse cardiovascular events (MACE) related to non-culprit lesions (HR 3.35, 95% CI 1.77-6.36, p < 0.001), together with plaque burden exceeding 70% and minimal luminal area ≤4 mm 2 [13]. The prevalence TCFAs was significantly higher in patients with ruptured culprit coronary plaques compared to eroded ones, while the nonculprit lesions in patients with ruptured culprits were also characterized by TCFAs [14]. For ST-elevation myocardial infarction (STEMI) in particular, the non-culprit obstructive lesions consist of a TCFA in half of the cases, together with other vulnerability features [15]. Culprit ruptured plaques in individuals with STEMI possess more vulnerability features compared to culprit eroded plaques, possibly explaining the variability of clinical outcomes of these two distinct morphologies [16]. When comparing patients with STEMI and non-STEMI (NSTEMI), vulnerability features (microvessels, calcification, TCFAs) were more common in culprit and non-culprit lesions of STEMI patients [17]. Culprit TCFAs were independent predictors of STEMI and non-culprit TCFAs were associated with the incidence of MACE at the two-year follow-up [17]. 
In the group of patients with diabetes mellitus with optical coherence tomography (OCT)-derived fractional flow reserve-negative lesions, TCFAs were identified in 25% of the study population and were the strongest predictor of incident MACE, consisting of cardiac mortality, target vessel myocardial infarction, clinically driven target lesion revascularization or unstable angina requiring hospitalization, at 18 months (HR 5.12, 95% CI 2.1-12.3, p < 0.001) [18]. Recognition of the vulnerable plaque morphology has prompted research towards the interventional management of such lesions. It should be initially mentioned that complete revascularization was superior to culprit-only revascularization regarding the reduction of future adverse cardiovascular events in patients with STEMI in the Complete versus Culprit-Only Revascularization Strategies to Treat Multivessel Disease after Early PCI for STEMI (COMPLETE) trial [19]. A randomized controlled trial of patients with vulnerable non-obstructive lesions treated with either a bioresorbable vascular scaffold plus optimal medical therapy or optimal medical therapy alone found superior efficacy and similar safety of the intervention at a median follow-up of 4.1 years [20]. Ongoing studies should further clarify the importance of vulnerable plaque-guided percutaneous coronary intervention.

Non-Invasive Assessment of the Vulnerable Coronary Atherosclerotic Plaque

Although OCT and IVUS remain the gold-standard in the assessment of VCAPs, their invasive nature mandates the development of non-invasive modalities, including circulating biomarkers and imaging methods, which can promptly and accurately evaluate the presence of high-risk features and potentially aid the risk stratification and management of high-risk patients (Figure 2).

Circulating Biomarkers

Inflammation represents a cardinal feature of atherosclerosis [4] and, unavoidably, several inflammatory biomarkers have been assessed regarding plaque vulnerability (Tables 1 and 2). Starting with the most studied inflammatory marker, C-reactive protein (CRP), its high levels were associated with the presence and the burden of TCFAs in patients with an ACS [21][22][23], as well as with plaque rupture [24]. The presence of increased CRP and high-risk features in OCT of patients hospitalized for an ACS may be an important prognostic clue for subsequent events [25]. However, other studies have found no association of CRP with lipid-rich plaques or with TCFAs [26,27], with the study of Koga et al. additionally pointing towards the association of pentraxin-3, another marker indicative of inflammation, with TCFAs instead of CRP [27]. A recent study further confirmed this hypothesis, with post-PCI pentraxin-3 being inversely correlated with fibrous cap thickness and positively correlated with lipid core length [28]. Critically, post-PCI pentraxin-3 values ≥ 4.08 ng/mL were identified as independent predictors of incident MACE [28]. Patients with STEMI and elevated pentraxin-3, together with plaque rupture or erosion, were at increased risk for future MACE [29].

Table 1. Comparison of non-invasive modalities (CCTA, cMRI, PET) for the assessment of vulnerable plaque characteristics. CCTA: coronary CT angiography, CI-AKI: contrast-induced acute kidney injury, CMA: coronary microcalcification activity, cMRI: cardiac magnetic resonance imaging, CNR: contrast-to-noise ratio, FAI: fat attenuation index, FDG: fluorodeoxyglucose, hsCRP: high-sensitivity CRP, IVCM: intravenous contrast medium, IVUS: intravascular ultrasound, MMP-9: matrix metalloproteinase-9, OCT: optical coherence tomography, PET: positron emission tomography, PMR: plaque-to-myocardium ratio, PTX-3: pentraxin-3, TBR: target-to-background ratio. ↑ indicates increased, ↓ indicates decreased, the number of + indicates the strength of the correlation.

Extracellular matrix (ECM) degradation is crucial in the formation of vulnerable plaques [30]. As a result, ECM biomarkers have been tested in this domain (Tables 1 and 2). Among them, matrix metalloproteinase (MMP)-9 has been associated with the presence of TCFAs in the culprit lesion of patients with an ACS, with an area under the receiver operating characteristic curve (AUROC) of 0.83 and an optimal cutoff of 9.9 ng/mL [31]. MMP-9 levels ≥ 65.5 ng/mL were associated with ruptured plaques in patients with ACS [32]. In stable CAD patients with elevated lipoprotein (a), MMP-9 was independently associated with VCAPs [33]. Dynamic changes in MMP-9 have also been studied. A significant elevation in plasma MMP-9 is noted soon after plaque disruption in patients undergoing PCI, and is higher for those with an ACS or with lipid-enriched plaques [34]. While it appears that the role of biomarkers in the assessment of VCAPs is limited to date (Tables 1 and 2), their combination into models might be of use. As shown in the study of Kook et al., such a model consisting of soluble lectin-like oxidized low-density lipoprotein receptor-1, MMP-9, white blood cell count, and the peak creatine kinase-myocardial band had a decent AUROC (0.84), sensitivity (62.2%), and specificity (97.6%) for the identification of plaque rupture in 85 patients with ACS, at a cutoff of 0.614 [35]. Furthermore, the addition of inflammatory biomarkers on top of imaging features of high-risk plaques may enhance the risk stratification for incident MACE [36]. Adequately sized clinical trials should be designed to assess the importance of combining circulating biomarkers with imaging features for the detection of VCAPs.
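To make the biomarker-combination idea concrete, the sketch below fits a logistic model over a panel in the spirit of the Kook et al. variables and reports its AUROC; all patient data, effect sizes, and units in it are invented for illustration and are not taken from the cited studies.

```python
# Illustrative sketch: combining circulating biomarkers into a single
# discriminative score for plaque rupture, in the spirit of the Kook et al.
# model (sLOX-1, MMP-9, WBC count, peak CK-MB). All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200  # hypothetical ACS patients
rupture = rng.integers(0, 2, size=n)  # 1 = plaque rupture on imaging

# Synthetic biomarker panel; ruptured cases drawn with shifted means.
def biomarker(base, shift, scale):
    return base + shift * rupture + rng.normal(0, scale, size=n)

X = np.column_stack([
    biomarker(1.0, 0.8, 0.5),    # sLOX-1 (ng/mL)
    biomarker(40.0, 25.0, 15.0), # MMP-9 (ng/mL)
    biomarker(8.0, 2.0, 2.0),    # WBC (10^3/uL)
    biomarker(50.0, 80.0, 40.0), # peak CK-MB (U/L)
])

model = LogisticRegression(max_iter=1000).fit(X, rupture)
score = model.predict_proba(X)[:, 1]  # combined probability-like score
print(f"AUROC of combined model: {roc_auc_score(rupture, score):.2f}")
# A cutoff on `score` (Kook et al. report 0.614) then yields the
# sensitivity/specificity for rupture detection.
```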
Moreover, microRNAs are being investigated in the management of atherosclerotic diseases [37,38], and preliminary results have associated their levels with vulnerable plaque characteristics [39].

Computed Tomography Coronary Angiography

The use of multi-slice computed tomography (MSCT) for the identification of vulnerable plaque characteristics has been investigated thoroughly (Tables 1 and 2, Figure 3). Initially, in the study of Pundziute et al. on 50 patients with stable CAD or ACS, the presence of a non-calcified or mixed plaque was a predominant finding in ACS compared to calcified plaques in stable CAD [40]. Interestingly, plaques characterized as TCFAs in IVUS were usually of mixed morphology in MSCT [40]. Similar results have been reported previously, with the presence of high-risk features (positive remodeling, spotty calcification, non-calcified plaques) being associated with ACS compared to stable CAD [41]. Small spotty plaque calcifications identified through coronary CT angiography (CCTA) were also correlated to the percentage of necrotic core and the prevalence of TCFA as assessed with IVUS [42]. Although a more precise characterization of plaque composition can be achieved with IVUS, a good correlation between the CT plaque classification and the IVUS-derived plaque composition has been noted [43]. When MSCT was compared to OCT in patients with ACS or stable CAD, important observations were reported. OCT-detected TCFAs in culprit lesions had a greater degree of positive remodeling and a lower attenuation value compared to non-TCFA culprit lesions [44]. Moreover, a ring-like enhancement in CT (plaque core with low CT attenuation surrounded by a rim-like area of higher CT attenuation) was common in TCFA, but with limited diagnostic accuracy (sensitivity: 44%, specificity: 96%) [44]. In the study of Ito et al., the assessment of coronary atherosclerotic plaques in 81 patients with clinically suspected CAD through OCT and MSCT demonstrated that an attenuation value of ≤62.4 Hounsfield Units (HU), a remodeling index (ratio of the outer cross-sectional vessel area at the site of the plaque divided by the outer area at the proximal reference site) ≥1.08, and a signet ring-like enhancement were independent predictors of OCT-defined TCFA in the multivariate analysis [45]. The diagnostic accuracy of plaque attenuation was the highest, with an AUROC of 0.859 [45]. The previously mentioned elements, together with the napkin-ring sign, were predictive of incident ACS in the study of Otsuka et al. [46]. In the study of Tomizawa et al., the investigators suggested that the low-attenuation plaque volume and remodeling index should be used as continuous values in conjunction with the napkin-ring sign in order to increase overall sensitivity and specificity to 94% and 91%, respectively [47]. Contrast/plaque attenuation ratios, created from CCTA for the characterization of each plaque component, were significantly correlated with IVUS-determined plaque component volumes [48]. A high necrotic core/fibrous plaque ratio may be related to IVUS-derived TCFA [49]. Increased epicardial fat volume and density have also been recognized as independent predictors of TCFAs [50,51]. Dual-source CT (DSCT) represents another non-invasive method of atherosclerotic plaque evaluation through the simultaneous image capture of two X-ray systems. As a result, enhanced temporal resolution and speed of acquisition can be achieved, paired with a significantly reduced radiation dose. Concerning VCAPs, they are associated with low CT values and with large cross-sectional plaque and lipid core areas. The differentiating ability of DSCT remains inadequate, however, with a sensitivity and specificity of 73.1% and 94% in detecting TCFA, respectively [52]. A low-attenuation plaque volume greater than 8 mm³, derived from DSCT, had a remarkable diagnostic potential regarding IVUS-defined TCFA, with accuracy, sensitivity, and specificity of 91%, 84.6%, and 96.8%, respectively [53]. The imaging of coronary perivascular adipose tissue (PVAT) with the so-called perivascular fat attenuation index (FAI) through CCTA deserves an honorable mention (Tables 1 and 2, Figure 4). Perivascular FAI assesses adipocyte lipid content and size, indicative of vascular inflammation, with close correlation to inflammation detected by PET [54]. As a result, coronary inflammation and subclinical CAD may be identified. Perivascular FAI was associated with non-calcified atherosclerotic plaques and was increased in culprit lesions of patients with ACS [54]. These observations led to the hypothesis that the early detection of such lesions may be essential in identifying vulnerable plaques in vulnerable patients. This hypothesis was tested in the Cardiovascular RISk Prediction using Computed Tomography (CRISP-CT) study, involving 1872 and 2040 participants in the derivation and validation cohorts, respectively [55]. The perivascular FAI around the right coronary artery, with a cutoff of ≥−70.1 HU, was found to be predictive of all-cause (adjusted HR: 2.55, 95% CI 1.65-3.92, p < 0.001) and cardiac mortality (adjusted HR: 9.04, 95% CI 3.35-24.40, p < 0.001) and was, therefore, selected as the marker of coronary inflammation [55]. The results were confirmed in the validation cohort.
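As a concrete illustration of how perivascular FAI is computed, the sketch below takes the commonly used definition (mean CT attenuation of adipose-range voxels, typically within -190 to -30 HU, inside a perivascular region of interest); the HU volume, the ROI mask, and the exact voxel window should be read as assumptions rather than the precise CRISP-CT pipeline.

```python
# Minimal sketch of a perivascular fat attenuation index (FAI) computation:
# the mean CT attenuation of adipose-range voxels (commonly -190 to -30 HU)
# inside a perivascular region of interest. The HU volume and mask here are
# synthetic stand-ins for real CCTA data and segmentation output.
import numpy as np

rng = np.random.default_rng(0)
hu = rng.normal(-90, 40, size=(64, 64, 64))      # synthetic HU volume
perivascular_mask = np.zeros_like(hu, dtype=bool)
perivascular_mask[20:44, 20:44, :] = True        # stand-in ROI around vessel

def fat_attenuation_index(hu_volume, roi_mask, lo=-190.0, hi=-30.0):
    """Mean HU of adipose-range voxels within the ROI."""
    adipose = roi_mask & (hu_volume >= lo) & (hu_volume <= hi)
    if not adipose.any():
        raise ValueError("no adipose-range voxels in ROI")
    return float(hu_volume[adipose].mean())

fai = fat_attenuation_index(hu, perivascular_mask)
print(f"Perivascular FAI: {fai:.1f} HU")
# CRISP-CT flagged high inflammation at FAI >= -70.1 HU around the RCA.
print("high-risk by CRISP-CT RCA cutoff:", fai >= -70.1)
```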
Importantly, the addition of high perivascular FAI to a risk prediction model consisting of age, sex, cardiovascular risk factors (hypertension, hypercholesterolaemia, diabetes, smoking, and adipose tissue volume), the extent of coronary artery disease (modified Duke coronary artery disease index), and the number of high-risk plaque features added significant incremental prognostic value for all-cause (ΔAUC: 0.042, p = 0.0083) and cardiac mortality (ΔAUC: 0.075, p = 0.0069) [55]. The new predictive model incorporating perivascular FAI at the optimal cutoff was also more efficient in the classification of patients, as it was highly specific with an excellent negative predictive value [55]. It should be noted that perivascular FAI was associated with clinical endpoints both for primary and secondary prevention across all of the examined subgroups [55]. In a post-hoc analysis assessing traditional high-risk plaque features (at least one of positive remodeling, low-attenuation plaque, spotty calcification, or napkin-ring sign) together with FAI at the previously proposed cutoff, the presence of both high-risk plaque features and high FAI was associated with a 7.3-fold higher risk of cardiac death after adjustment for several factors, even when compared to the presence of high-risk plaque features alone [56]. The observations were similar, albeit attenuated, when FAI was assessed at the left anterior descending artery with a cutoff of ≥−79.1 HU [56]. Statin treatment has been found to decrease the perivascular FAI in high-risk lesions, representing an appealing approach for the monitoring of patient response and the assessment of the residual risk [57]. Perivascular FAI may also help discriminate the atherosclerotic changes from other inflammatory diseases such as myocarditis, which may present similarly to an ACS. However, perivascular FAI values are lower in the case of myocarditis compared to atherosclerosis, as recently demonstrated by Baritussio et al. [58]. Moreover, its use in chronic autoimmune inflammatory diseases needs to be elucidated further, since patients with psoriasis had significantly lower vascular inflammation assessed by perivascular FAI [59], as opposed to the common belief that chronic low-grade inflammation in such pathologic states leads to a greater extent of inflammatory atherosclerotic changes. Although no differences were noted regarding the use of biologic therapy or statins in this study [59], Elnabawi et al. had previously shown a decrease in perivascular FAI following the use of biologic therapy in patients with moderate-to-severe psoriasis [60]. CCTA radiomics may be an important next step in advancing the recognition of vulnerable plaques via CT (Tables 1 and 2), being superior in diagnostic accuracy compared with the conventional high-risk plaque features from IVUS, OCT, or positron emission tomography (PET) [61][62][63]. Radiomic profiling of PVAT remodeling alterations has also been investigated. Adipose tissue wavelet-transformed mean attenuation was sensitive in detecting PVAT inflammation, while features of radiomic texture were related to PVAT fibrosis and vascularity [64]. As far as their relationship with MACE is concerned, a machine learning algorithm consisting of the fat radiomic profile was derived from a training cohort and then validated in 1575 individuals of the Scottish Computed Tomography of the Heart (SCOT-HEART) trial, improving the prediction of incident MACE compared to standard features assessed by CCTA (ΔC-statistic: 0.126, p < 0.001) [64]. Moreover, the fat radiomic profile was increased in ACS patients compared to matched controls and it remained unchanged after a six-month follow-up, possibly indicating permanent changes in PVAT [64]. Ultimately, the recently developed CaRi-Heart® device, incorporating the evidence from the previously mentioned studies, drastically improved the risk stratification of patients compared to conventional models (ΔC-statistic: 0.149, p < 0.001 in the validation cohort) [65].

Magnetic Resonance Imaging

Despite the fact that cardiac magnetic resonance imaging (cMRI) is not widely adopted in the evaluation of VCAP, considerable scientific research has been performed in this domain. Early studies have shown that the MRI-assessed area of plaque tissue components (lipid-rich necrotic core, calcium) correlated with the histopathologic evaluation. Importantly, those two components could be reliably differentiated from fibrous tissue. The histopathologically defined vulnerable plaque was associated with a large lipid area and reduced minimal fibrous cap thickness in MRI [66]. The local stress/strain pattern in areas of TCFAs was proposed as another index of MRI-defined plaque vulnerability [67]. Plaque wall stress was assessed by Huang et al. using ex-vivo MRI in coronary plaques of 12 deceased patients with the use of three-dimensional fluid-structure interaction models, thus calculating the critical plaque wall stress [68]. This parameter was significantly increased in patients that died from CAD-related causes compared to the control group, while the plaque burden did not differ significantly [68]. Segmental pericoronary epicardial adipose tissue volume, quantified by cMRI, has been associated with CT-derived vulnerability features such as low attenuation and non-calcified or mixed morphology [69].
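Several of the markers above are judged by the incremental discrimination they add to a baseline risk model, reported as ΔAUC or ΔC-statistic, and the same logic applies to the MRI indices discussed next. The sketch below illustrates that comparison on synthetic data, where the baseline covariates, the marker, and the effect sizes are all invented.

```python
# Sketch of how incremental prognostic value (the "delta AUC" reported for
# perivascular FAI and the fat radiomic profile) can be quantified: compare
# the discrimination of a baseline risk model against the same model plus
# the new marker. Data are synthetic; only the procedure is illustrated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
event = rng.integers(0, 2, size=n)                               # cardiac event yes/no
baseline = rng.normal(0, 1, size=(n, 4)) + 0.3 * event[:, None]  # age, sex, risk factors
marker = rng.normal(0, 1, size=(n, 1)) + 0.8 * event[:, None]    # e.g. a FAI-like marker

Xb_tr, Xb_te, Xm_tr, Xm_te, y_tr, y_te = train_test_split(
    baseline, np.hstack([baseline, marker]), event, random_state=1)

auc_base = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_plus = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(Xm_tr, y_tr).predict_proba(Xm_te)[:, 1])
print(f"baseline AUC: {auc_base:.3f}  with marker: {auc_plus:.3f}  "
      f"delta: {auc_plus - auc_base:.3f}")
```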
Moving to non-contrast T1-weighted images, in a prospective study of 568 patients with suspected or known CAD, the presence of high-intensity plaques [plaque-to-myocardium signal intensity ratio (PMR) ≥ 1.4] together with a history of CAD was independently associated with incident coronary events (HR: 3.96; 95% CI: 1.92-8.17, p < 0.001) [70]. Utilizing this approach in 77 patients with stable CAD undergoing PCI, Hoshi et al. correlated high-intensity plaques based on the above-mentioned cutoff to IVUS-derived characteristics of vulnerable plaques [71]. The presence of a high-intensity plaque was also associated with periprocedural myocardial injury [71]. In another study, high-intensity signal plaques with PMR > 1 were characterized based on their location as intrawall or intraluminal, which had important morphological implications [72]. Specifically, intrawall high-intensity signal plaques had macrophage accumulation in the absence of calcifications, whereas intraluminal plaques were more commonly met with thrombi and intimal microvessels [72]. Compared to OCT, PMR as a continuous variable was linearly correlated with the number of high-risk plaque features of the culprit lesion [73]. Among those high-risk features, non-calcified plaque, thrombus, and intimal vasculature were independently associated with PMR [73]. Intensive 12-month statin therapy led to the reduction of PMR in high-intensity plaques [74], thus providing an additional role in the monitoring of patients. As far as contrast-enhanced cMRI is concerned, early contrast enhancement of a coronary plaque may also be a sign of vulnerability, as it is more frequently encountered in cases of unstable angina pectoris compared to patients with stable CAD [75]. Moreover, delayed contrast enhancement assessed with the contrast-to-noise ratio (CNR) was significantly higher in culprit lesions compared to non-culprit lesions [76]. Gadofosveset-enhanced cMRI (GE-cMRI) could identify and exclude culprit lesions in ACS or CAD patients (sensitivity: 82%, specificity: 83%), while the areas where TCFAs were detected through OCT were characterized by increased CNR [77]. When comparing GE-cMRI with T1-weighted cMRI in patients with clinical suspicion of CAD, hemodynamically significant lesions with a quantitative flow reserve < 0.8 had a higher lesion CNR only in GE-cMRI [78].

Nuclear Imaging

The increasing frequency in the use of nuclear imaging techniques in cardiovascular diseases has not spared the assessment of VCAP. Initial studies were conducted in cancer patients, with 18F-fluorodeoxyglucose (FDG) PET/CT detecting significant correlations of the target-to-background ratio (TBR) in the region of the left anterior descending artery with cardiovascular risk factors, pericardial fat volume, and calcified plaque burden [79]. However, myocardial uptake of FDG limited its applicability in the entire patient population (Tables 1 and 2) [79]. Myocardial FDG uptake could be diminished through the consumption of a low-carbohydrate, high-fat meal the night before the procedure [80], a finding which was confirmed in a randomized trial [81]. However, patients with diabetes mellitus should be handled with caution, since regular dietary recommendations are usually ineffective, as these patients may not be able to produce adequate insulin in response to glucose loading. Therefore, techniques such as the euglycemic-hyperinsulinemic clamp should be applied for adequate image quality and results [82].
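Both PMR and CNR reduce to simple ratios of region-of-interest statistics. A toy sketch follows, with all signal values invented and the CNR definition (lesion minus reference signal over the noise standard deviation) taken as the common convention rather than from the cited studies.

```python
# Toy illustration of two signal-intensity metrics used above: the
# plaque-to-myocardium ratio (PMR) from non-contrast T1-weighted cMRI
# (PMR >= 1.4 flags a high-intensity plaque) and a contrast-to-noise
# ratio (CNR) for enhanced imaging. All ROI statistics are placeholders.
def pmr(plaque_si: float, myocardium_si: float) -> float:
    return plaque_si / myocardium_si

def cnr(lesion_si: float, background_si: float, noise_sd: float) -> float:
    return (lesion_si - background_si) / noise_sd

ratio = pmr(plaque_si=182.0, myocardium_si=120.0)
print(f"PMR = {ratio:.2f} -> high-intensity plaque: {ratio >= 1.4}")
print(f"CNR = {cnr(lesion_si=95.0, background_si=60.0, noise_sd=7.5):.1f}")
```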
18F-sodium fluoride (18F-NaF) is an alternative tracer that has been used, and is indicative of calcification and macrophage activity. As there is limited myocardial uptake with this tracer, motion correction techniques have been additionally applied to enhance coronary artery plaque visualization, with an encouraging 46% reduction of image noise being achieved [83]. Triple-gated corrections may further augment the reproducibility of the examination [84]. As far as atherosclerotic plaque assessment is concerned, 18F-NaF uptake was increased in patients with CAD and correlated with the calcium score, while the 18F-FDG uptake did not differ according to CAD status [85]. A few vulnerable plaques were detected in diabetics without known CAD, with a TBR cutoff ≥ 1.5 [86]. The prevalence of fluoride-positive plaques was higher in patients after an ACS compared to stable CAD in another study [87]. Other than TBR, the efficiency of the coronary microcalcification activity (CMA) across the entire coronary circulation has been tested in patients with recent ACS and multivessel CAD [88]. Both CMA and TBR were increased in low-attenuation plaques compared to the rest, but a CMA threshold >0 was superior in detecting the low-attenuation plaques compared to a TBR > 1.25, with remarkable sensitivity and specificity (93.1% and 95.7%, respectively) [88]. Using tracers that target specific plaque components is also being investigated. Interest has been shown towards 68Ga-DOTATATE, a tracer that binds to somatostatin receptor 2, which is expressed in macrophages. In patients with neuroendocrine tumors, the use of this tracer in PET/CT demonstrated significantly higher TBR in atherosclerotic plaques compared to normal coronary arteries [89]. Through the use of a tracer that targets vascular cell adhesion molecule-1, in-vivo PET/CT imaging of the aorta in murine models was successful in diagnosing atherosclerotic lesions and their extent [90]. Furthermore, utilization of a selective radiotracer for MMP-13 led to superior identification of plaques with MMP-13 expression, indicative of extracellular matrix remodeling and, thus, potentially vulnerable [91]. Additionally, 68Ga-pentixafor is known for its binding ability to the CXC-motif chemokine receptor 4 (CXCR4), which is implicated in atherosclerosis. Following PET/CT imaging with this radiotracer, an increased uptake was noted in calcified plaques and in patients with an increasing number of cardiovascular risk factors [92]. Lastly, the use of a novel glycoprotein IIb/IIIa-receptor radiotracer, 18F-GP1, has been recently evaluated in 44 patients after myocardial infarction. Culprit vessels had higher uptake of the radiotracer compared to non-culprits, whose uptake was similar to that of controls. The optimal cutoff of the maximum TBR for the culprit vessel was reported at 1.20 (sensitivity: 60%, specificity: 97%) [93]. Even though PET/CT has received most of the attention, hybrid PET/MRI imaging methods may be considered as an option, even though their use has been mostly experimental to this point, aiming at improving image quality [94]. In the only available clinical study (Tables 1 and 2), the use of 18F-NaF in gadobutrol-enhanced PET/MRI led to the identification of TCFAs and lipid cores in segments with TBR > 1.28 and >1.25, respectively [95]. Interestingly, CNR was correlated with calcified TCFA in cases of TBR > 1.28 [95].
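The TBR itself is another simple ratio, of lesion uptake to blood-pool background. The sketch below applies the thresholds quoted above; the SUV numbers are placeholders, not measurements from the cited studies.

```python
# Sketch of a target-to-background ratio (TBR) for PET plaque imaging:
# lesion uptake normalized to blood-pool background, with the 18F-NaF
# thresholds quoted above (TBR > 1.25 for fluoride-positive plaques /
# lipid core, > 1.28 for TCFA in the PET/MRI study).
def tbr(lesion_suv_max: float, bloodpool_suv_mean: float) -> float:
    return lesion_suv_max / bloodpool_suv_mean

lesion, blood = 2.1, 1.5               # hypothetical SUV measurements
ratio = tbr(lesion, blood)
print(f"TBR = {ratio:.2f}")
print("fluoride-positive (TBR > 1.25):", ratio > 1.25)
print("TCFA-range uptake (TBR > 1.28):", ratio > 1.28)
```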
Awareness in the field of nuclear imaging of atherosclerosis is rapidly growing, with the upcoming clinical studies being eagerly awaited. Finally, it should be stressed that nuclear imaging studies have no relevant contraindications in subjects with chronic kidney disease, in contrast to CCTA and cMRI, where there is a concern of serious renal complications such as CI-AKI and nephrogenic systemic fibrosis (Table 1).

Non-Invasive Assessment of VCAP: Current State

To conclude, the presented evidence concerning the progress of non-invasive modalities in the assessment of VCAPs indicates the extensive knowledge we have attained with regard to the process of coronary atherosclerosis. Identification of such adverse plaque characteristics may thus represent an appealing option in the holistic management of patients with CAD by providing incremental prognostic information and tailoring the therapeutic approach. Published studies have shown a good correlation with invasive methods or even plaque histology. On top of that, abnormalities in circulating inflammatory and extracellular matrix degradation biomarkers could serve as an additional risk marker. However, the lack of large-scale, multicenter randomized clinical trials and registries is a deterring factor for the widespread implementation of these modalities in everyday clinical practice. Moreover, uncertainties remain regarding the optimal imaging method of choice, with most data stemming from CCTA studies. Limited evidence is available from nuclear imaging studies, which may also face certain limitations concerning myocardial uptake and motion correction that ought to be resolved. Therefore, upcoming studies should be adequately designed to provide the needed answers in the existing evidence gaps and prove the incremental value of the non-invasive, multimodality assessment of VCAPs.
Tests of lepton universality with semi-tauonic b-quark decays

Several measurements have shown deviations from lepton universality in tree-level semi-leptonic decays of b hadrons. In these proceedings three LHCb measurements of lepton universality in such decays are presented along with a brief overview of the future possibilities for these and similar observables.

Introduction

In the Standard Model (SM) the three lepton generations all have the same coupling strength to the electroweak gauge bosons. Such lepton universality can be tested in semi-leptonic decays with observables R(X), typically defined by

R(X) = B(Hb → Xτ+ντ) / B(Hb → Xℓ+νℓ),

where Hb denotes some decaying b hadron, X is a hadron in the final state and ℓ is a light lepton (an electron or a muon). In the case of lepton universality the only deviation of R(X) from unity would be due to the different masses of the leptons leading to different available phase spaces in the decay and varying the impacts of the hadronic form-factors. Several independent measurements have shown deviations from the SM predictions in the observables R(D*) (B → D*ℓ+νℓ) and R(D) (B → Dℓ+νℓ) [1-8]. The current experimental average and theoretical prediction are shown in Fig. 1. The combination of the individual measurements can be seen to lie approximately 3.7σ away from the theory expectation. If this is taken to be a sign of physics beyond the SM then such new physics (NP) is competing with SM tree-level diagrams. LHCb has made two measurements of R(D*) using 3 fb−1 of pp collision data with centre-of-mass energies of 7 and 8 TeV collected in 2011 and 2012 during Run 1 of the LHC. A measurement has also been made of a complementary observable R(J/ψ) with the same dataset. These measurements are described in the following sections.

Reconstructing semi-tauonic B decays at LHCb

τ leptons are short lived so they can only be reconstructed via their decays. At LHCb this is done either with the τ+ → µ+ν̄τνµ leptonic mode, or the τ+ → π+π−π+(π0)ντ hadronic decay. For the leptonic mode the component of the B momentum along the beam (z) axis is estimated with a rest-frame approximation, rescaling the measured momentum of the visible D*−µ+ system by the ratio of the B mass to the visible mass to give the estimated true B momentum in the z direction. As both the pp interaction point and the B decay-vertex can be well measured thanks to LHCb's excellent vertex resolution, the B flight path is known and so the B momentum vector can be constructed from the estimated pz(B) and the angle between the B flight vector and the z direction. Using this approximation several quantities can be constructed: the square of the invariant mass of the three neutrinos, m²miss; the square of the invariant mass of the lepton system, q²; and the muon energy in the approximated B rest-frame, E*µ. Such variables can be used to discriminate the B0 → D*−τ+ντ signal from the backgrounds and from the B0 → D*−µ+νµ normalisation mode. For the hadronic decays of the τ+ both the B0 and τ+ flight vectors are known, from the vertex formed by the D̄0π− of the D*− candidate for the former and the three-pion vertex of the latter. This means that the kinematics of the τ+ and B0 candidates can be determined up to a two-fold ambiguity; in each case the minimum of the two solutions is chosen. Again this approximation allows the kinematics of the decay to be estimated so that the signal can be separated from the backgrounds and normalisation modes.

Muonic R(D*)

A measurement of R(D*) has been made using 3 fb−1 of data collected by LHCb in Run 1 of the LHC using the τ+ → µ+ν̄τνµ decay [6]. The observable of interest was extracted by a three-dimensional binned template fit to the three kinematic variables q², m²miss and E*µ.
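A minimal numerical sketch of this reconstruction chain for the muonic mode is given below. The four-momenta are invented, the B flight direction is taken along z for brevity, and the rescaling by mB/mvis is the rest-frame approximation described above; none of the numbers correspond to real events.

```python
# Sketch of the rest-frame approximation used for the muonic mode: the B
# momentum along the beam (z) axis is estimated by rescaling the visible
# D*-mu system by m_B/m_vis, after which m2_miss, q2 and E*_mu follow from
# four-vector arithmetic. Numbers are illustrative, in GeV.
import numpy as np

M_B = 5.27963  # B0 mass, GeV

def mass(p):  # p = [E, px, py, pz]
    return np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))

p_dst = np.array([10.2, 0.9, -0.4, 10.0])   # invented D*- four-momentum
p_mu  = np.array([ 6.1, 0.3,  0.2,  6.0])   # invented mu+ four-momentum
p_vis = p_dst + p_mu

# Rest-frame approximation: (pz)_B ~ (m_B / m_vis) * (pz)_vis; the B flight
# direction is taken along z here for brevity.
m_vis = mass(p_vis)
pz_B = (M_B / m_vis) * p_vis[3]
p_B = np.array([np.hypot(M_B, pz_B), 0.0, 0.0, pz_B])

p_miss = p_B - p_vis
m2_miss = p_miss[0]**2 - p_miss[1]**2 - p_miss[2]**2 - p_miss[3]**2
q = p_B - p_dst
q2 = q[0]**2 - q[1]**2 - q[2]**2 - q[3]**2

# E*_mu: muon energy boosted into the approximated B rest frame (boost along z).
beta = p_B[3] / p_B[0]
gamma = 1.0 / np.sqrt(1.0 - beta**2)
E_mu_star = gamma * (p_mu[0] - beta * p_mu[3])

print(f"m2_miss = {m2_miss:.2f} GeV^2, q2 = {q2:.2f} GeV^2, "
      f"E*_mu = {E_mu_star:.2f} GeV")
```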
Backgrounds arise from particles being mis-identified as a muon, random combinations of D*− and µ+ (referred to as combinatorial), B → D*−Xc decays where Xc represents a charm hadron that decays semi-leptonically, and B → D*−µ+νµX, which is dominated by the B → D**µ+νµ decays that are not well measured. The templates for the first two components can be taken from data using dedicated samples. For the signal, normalisation and other background components simulation was used. The fitted value of R(D*) is

R(D*) = 0.336 ± 0.027 ± 0.030,

where the first uncertainty is statistical and the second systematic. Although the result appears to be limited by the systematic uncertainty, the dominant contribution is the limited statistics of the simulation samples. This should be readily reducible such that one should expect the precision of R(D*) to continue improving in the future.

Hadronic R(D*)

A further measurement of R(D*) has been made utilising the hadronic decay mode of the τ [7,8]. In this case the signal and normalisation final states are not the same. Instead, the B0 → D*−τ+ντ branching fraction is normalised to the decay B0 → D*−π+π+π− and an external measurement of the latter branching fraction is used to extract R(D*). The sources of backgrounds for the hadronic mode are different to those from the muonic measurement. These include B → D*−π+π+π−X with the pions originating at the B vertex and B → D*−Xc with the second charm hadron decaying to at least three pions. The former can be reduced by considering the displacement of the three-pion vertex from the B vertex. The latter is reduced by means of a Boosted Decision Tree (BDT) discriminant. The signal is extracted by means of a three-dimensional template fit to the variables q², the proper decay-time of the τ+ candidate, tτ, and the BDT discriminant. The fit projections are shown in Fig. 2. The extracted value of R(D*) is

R(D*) = 0.291 ± 0.019 ± 0.026 ± 0.013,

where the first and second uncertainties are statistical and systematic. The third uncertainty arises from the B0 → D*−π+π+π− branching fraction. This result is consistent with both the SM prediction and the experimental average. Again the dominant systematic uncertainty is due to the limited size of the simulation samples.

Muonic R(J/ψ)

Beyond R(D) and R(D*) there are other ratios that merit investigation in order to understand the structure of the NP contributions. Different observables may be sensitive to different NP contributions. They will certainly also have different theoretical and experimental uncertainties and so provide an excellent check of existing measurements. To that end LHCb has measured the observable R(J/ψ), defined as

R(J/ψ) = B(Bc+ → J/ψτ+ντ) / B(Bc+ → J/ψµ+νµ),

with τ+ → µ+ν̄τνµ and J/ψ → µ+µ− [11]. Bc+ decays are unique to the LHC experiments as they cannot be studied at the B factories. The analysis method is very similar to the other two measurements; the R(J/ψ) observable is extracted by means of a three-dimensional binned template fit to the kinematic variables. In addition the Bc+ proper decay-time was fitted in order to control backgrounds from mis-identified muons. Most of these will come from B → J/ψX decays where the X is a hadron that has been mis-identified. As the Bc+ is shorter lived than the B0 and B+ the proper decay-time offers some discriminating power. The result of the fit is

R(J/ψ) = 0.71 ± 0.17 ± 0.18,

where the first uncertainty is statistical, the second systematic.
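The template-fit machinery common to all three measurements can be illustrated with a one-dimensional toy, given below: invented fixed-shape templates for signal, normalisation and background are scaled to a pseudo-dataset by minimising a Poisson negative log-likelihood. This is a sketch of the method only; the real fits are three-dimensional with data-driven and simulated templates.

```python
# Conceptual 1D toy of a binned template fit: fixed-shape templates are
# scaled to match an observed histogram by minimising a Poisson NLL.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
nbins = 20
x = np.linspace(0, 1, nbins)

def norm(t):  # unit-normalised invented template shapes
    return t / t.sum()

sig  = norm(np.exp(-0.5 * ((x - 0.7) / 0.15) ** 2))
refc = norm(np.exp(-0.5 * ((x - 0.3) / 0.20) ** 2))  # normalisation mode
bkg  = norm(np.ones_like(x))

true_yields = np.array([300.0, 3000.0, 500.0])
data = rng.poisson(true_yields[0] * sig + true_yields[1] * refc
                   + true_yields[2] * bkg)

def nll(yields):
    mu = yields[0] * sig + yields[1] * refc + yields[2] * bkg
    mu = np.clip(mu, 1e-9, None)
    return np.sum(mu - data * np.log(mu) + gammaln(data + 1.0))

fit = minimize(nll, x0=[100.0, 1000.0, 100.0],
               bounds=[(0, None)] * 3, method="L-BFGS-B")
n_sig, n_norm, n_bkg = fit.x
print(f"fitted yields: signal={n_sig:.0f}, norm={n_norm:.0f}, bkg={n_bkg:.0f}")
print(f"raw yield ratio N_sig/N_norm = {n_sig / n_norm:.3f} "
      "(efficiency corrections then give R)")
```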
The measured value of R(J/ψ) is about 2σ from the SM expectations of 0.25-0.28 [12][13][14][15] and represents the first evidence of the decay Bc+ → J/ψτ+ντ. As previously, the dominating systematic uncertainty is due to the limited size of the simulation samples. There is also a significant source of uncertainty from the Bc+ → J/ψ form-factors. There are lattice calculations underway so this source of uncertainty should be reduced in the future and the SM predictions will also become more precise.

Future prospects at LHCb

All of the analyses presented here were done using the 3 fb−1 of data collected during Run 1 of the LHC. It is expected that by the end of Run 2 in 2018 LHCb will have collected a further 6 fb−1. The yield of b hadrons per fb−1 is greater in Run 2 than Run 1 due to the increased production cross-section at the higher collision energy (7 and 8 TeV in Run 1, 13 TeV in Run 2) and some improvements in the trigger. Therefore one can reasonably expect that the decrease in the statistical uncertainties of the measured quantities will be greater than a naive scaling with integrated luminosity. Beyond Run 2 the first upgrade to the LHCb detector [16] will be installed in long shutdown two (LS2), commencing at the end of 2018. The upgraded detector will start collecting data in 2021 and it is envisaged that 50 fb−1 of integrated luminosity will be in hand by the end of LHC Run 4 (2030). Subsequently another upgrade of the detector is proposed [17] to take full advantage of the HL-LHC and collect 300 fb−1 in total. In order to improve the precision on the presented results it will be necessary to produce larger samples of simulation as this is already the dominating source of systematic uncertainty. Assuming this technical difficulty can be overcome then there is little to prevent the full exploitation of the expected large-yield data samples. Furthermore one can hope that in collaboration with the theory community LHCb will be able to make several new measurements in this area in order to further disentangle these anomalies.

Conclusions

Three tests of lepton universality in semi-leptonic b decays carried out by the LHCb collaboration have been described. The two measurements of R(D*) are consistent with the experimental average, which shows a deviation from the SM prediction of approximately 3σ [9]. The measurement of R(J/ψ) also suggests a tension with the SM although with lower significance. These were all made with the LHCb Run 1 dataset. At the time of writing the LHC is coming to the end of its second data-taking period, by which time LHCb will have in total 9 fb−1 of data in hand. There is therefore much work to be done to analyse this large volume of data and shed further light on the lepton universality anomalies.
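As a closing back-of-the-envelope on these prospects, statistical uncertainties scale like 1/sqrt(N) with the accumulated yield. The sketch below applies this naive scaling to the muonic R(D*) statistical error; the factor of roughly two taken for the b-hadron cross-section gain between Run 1 and 13 TeV is an assumption, not a quoted number.

```python
# Naive scaling of a statistical uncertainty with integrated luminosity and
# production cross-section: yields grow like sigma * L, and statistical
# errors shrink like 1/sqrt(N).
import math

def scaled_stat_error(err_now, lumi_now, lumi_future, xsec_gain=1.0):
    n_ratio = (lumi_future / lumi_now) * xsec_gain
    return err_now / math.sqrt(n_ratio)

err_run1 = 0.027  # statistical error of the muonic R(D*) measurement
for lumi, gain, label in [(9.0, 2.0, "Run 1+2"), (50.0, 2.0, "post-upgrade")]:
    print(f"{label}: ~{scaled_stat_error(err_run1, 3.0, lumi, gain):.3f}")
```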
Experimental Verification of Single-Stage Power Factor Correction Converter with Improved Light-Load Efficiency

Single-stage single-switch ac/dc converters with power factor correction have higher power losses under a light-load condition than the two-stage approach because, with a shared power transistor, the power factor correction (PFC) stage cannot be switched OFF separately to save power. This paper addresses the problem by using a buck topology for the PFC stage of single-stage single-switch converters, as it can be completely turned OFF. This converter topology is capable of working in both buck mode and buck-boost mode depending on the rectified voltage at the input side and the power at the load side. A simple and effective topology incorporating PFC for low-power applications is simulated in the Matlab/Simulink environment. Hardware implementation of the converter was carried out and its performance was verified with various practical light loads. It also satisfies the harmonic compliance of the source current in accordance with the IEEE 519:1992 recommendations. The control is very simple and gives good performance.

Introduction

Since research on single-stage PFC converters began in the mid-nineties, they have been adopted in a variety of low-power conversion hardware. This acceptance is a direct result of their simplified converter structure and control circuit compared with the two-stage PFC design. Recently, energy efficiency has been incorporated into switching power supply design due to the implementation of stringent standards from regulatory bodies, for example, Energy Star and the AU/NZ Minimum Energy Performance Standards (MEPS). There are different approaches to reducing power consumption under a light-load condition. Including a small auxiliary power conversion circuit in parallel with the output load enhances the overall efficiency, especially under a light-load condition. The idea is to improve both the main and auxiliary power conversions by varying the sharing of the total output power between the two conversions under light-load conditions. A single-switch bridgeless boost PFC converter is proposed in [1], employing a single switch that performs both PFC at the input side and boosting of the dc voltage. It uses the same number of energy storage elements as the conventional converter. It aims at minimizing the switching loss for improving the efficiency. The buck PFC front-end has higher light-load efficiency than its boost counterpart because of its lower voltage stress on switching devices. At light load, switching loss dominates over conduction loss, and the voltage stress is the key factor in determining the switching loss. For the two-stage PFC design, turning off the PFC stage has been demonstrated to be an effective strategy to decrease power loss. It is hard to apply the same technique to single-stage PFC converters as the PFC stage and dc/dc stage share the same power switch, unless the converter has multiple switches and is built differently such that it can control the currents from the AC line and the intermediate storage capacitor to flow into the converter in separate time slots within a switching period. A possible method of turning off the PFC stage in single-switch PFC converters, like that in the two-stage approach, is proposed in [2]. The idea takes advantage of the varying input voltage and the dead-angle characteristic of the input current of the ac/dc converter.
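The dead-angle characteristic can be quantified with a short sketch: a buck front-end draws no input current while the rectified line voltage is below the output voltage, giving a dead zone of 2·arcsin(Vo/Vpk) per half cycle around each zero crossing. The voltage values below are illustrative only.

```python
# Sketch of the "dead angle" exploited above: a buck PFC front-end conducts
# only while the rectified line voltage exceeds the output voltage, so each
# half-cycle contains a dead zone of width 2*asin(Vo/Vpk).
import math

def dead_angle_deg(v_out: float, v_peak: float) -> float:
    """Total dead angle per half line cycle, in degrees."""
    theta = math.asin(v_out / v_peak)   # angle where vin crosses Vo
    return math.degrees(2.0 * theta)    # dead zone at both ends

v_peak = 24.0 * math.sqrt(2.0)          # 24 Vrms source, as simulated later
for v_out in (5.0, 12.0, 20.0):
    dead = dead_angle_deg(v_out, v_peak)
    print(f"Vo={v_out:>4.1f} V: dead angle {dead:5.1f} deg "
          f"({dead / 180:.0%} of the half-cycle, PFC stage off)")
```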
The buck or buck-derived PFC converter naturally has this dead-angle characteristic, as there are times during a line cycle when the input voltage is smaller than the output voltage and the PFC stage is effectively turned off. The motivation behind this work is therefore to explain, analyze, and demonstrate this methodology through a selected single-stage single-switch buck-derived PFC topology under light-load operation, where the power consumption of the load is just a few watts or less, and to evaluate different switching patterns in search of the lowest power losses under this condition. Experimental results are presented to demonstrate the effectiveness of the proposed light-load power loss reduction scheme. The main advantage of this topology is its simplicity; it can be used for low-power applications, and improvement in power factor can be easily implemented by this converter. With the concept of the zero-voltage switching boost-integrated technique (ZVS BIT), a ZVS flyback-boost converter with voltage doubler rectifier is derived and investigated in detail as an example. In addition, focused on the soft-switching characteristics of the ZVS BIT, the LLFM control method is analyzed to extend the ZVS load range [3]. A buck-boost converter topology for implementing digital power factor correction based on a low-cost digital signal controller, which operates the converter in continuous conduction mode and thereby significantly reduces input current harmonics, is presented in [4]. A two-stage optimization procedure to optimize the power converter efficiency from light load to full load is proposed; the procedure first breaks the converter design variables into many switching frequency loops [5]. A detailed design-oriented analysis of the clamped-current buck (CCB) PFC converter is presented in [6]. The design is focused on the slope of the external current ramp. It is shown that with a constant slope of the external current ramp in the whole input voltage range, an optimum design cannot be achieved. The slope of the external ramp should be variable and increase with increasing input voltage [6]. The bulk capacitor voltage feedback with a coupled winding structure can effectively reduce both the voltage and current stresses in single-stage PFC ac/dc converters. It is also pointed out that changing Lm has no effect on the bulk capacitor voltage and duty ratio if Lm operates in CCM at heavy load [7]. By dynamically scaling the gate voltage swings of large, integrated MOS power devices, light-load efficiency can be improved and the usable load current range extended in synchronous rectifier buck converters [8]. The PFC for a single-phase AC-DC buck-boost converter operated in Continuous Conduction Mode (CCM) using inductor average current mode control is discussed in [9]. It has the advantage of robustness even under large variations in the voltage and load.

Single-stage PFC buck converter

The circuit of the single-stage PFC buck converter is shown in figure 1.

Operating principle

To facilitate the explanation of light-load operation in single-stage PFC converters, a recently introduced transformerless high-step-down single-stage PFC converter is used as an example.

Figure 1. Circuit of single-stage PFC buck converter.

The converter, as shown in figure 1, is an integration of a buck PFC cell with a buck-boost dc/dc cell. The buck PFC cell consists of L1, S1, D1, Co and CB, while the buck-boost dc/dc cell comprises L2, S1, D2, D3, Co and CB.
Since the input voltage Vin varies between zero and its peak value, and the buck converter only operates when the input voltage is greater than the output voltage, the proposed converter has two operation modes.

Mode 1 (Vin ≤ VCB + Vo, where VCB is the dc-link capacitor voltage and Vo is the output voltage): The buck PFC cell is inactive and there is no input current flowing into the converter. In this mode only the buck-boost dc/dc cell is active. When the switch S1 is turned ON, L2 is charged by CB through D2. The output voltage is supported by the output capacitor Co. After a duration of dTs, S1 is turned off. The energy stored in L2 is transferred to the output by means of D3. The inductor current iL2 is totally discharged before the beginning of the next switching cycle.

Mode 2 (Vin > VCB + Vo): Both the buck PFC cell and the buck-boost dc/dc cell are active. When S1 is turned ON, L1 is charged by the input while L2 is charged by CB through D2. After a duration of dTs, S1 is turned off. L2 discharges its energy to the output through D3 while L1 couples its energy to both the output and dc-bus capacitors, as the Co-RL branch and CB are in series in the current path of iL1. Both iL1 and iL2 are totally discharged before the start of the following switching cycle.

Design parameter

In this section, the proposed buck and buck-boost PFC converter is designed with all circuit parameters [10, Chap. 13]. For the buck converter, the duty ratio is given by

D = Vo / Vin (1)

where Vo is the output voltage of the converter and Vin is the input voltage of the converter. The critical value of the inductance, Lcrit, follows from the condition that the buck converter operates at the boundary of continuous conduction, which can be expressed as

Lcrit = (1 - D)R / (2f) (2)

where R is the output load and f is the switching frequency of the converter. The critical value of the output capacitor is

Ccrit = (1 - D) / (16Lf²) (3)

Therefore, the critical inductor and capacitor of the proposed converter can be designed according to equations (2) and (3), where the inductance of the inductor and the output capacitance should be chosen higher than the critical values. For the buck-boost converter, the critical value of the inductor follows from the same condition and can be expressed as in equation (2), whereas the critical value of the output capacitor is given by

Ccrit = D / (2fR) (4)

Therefore, the critical inductance and capacitance of the proposed converter can be designed according to equations (2) and (4), where the inductance and output capacitance should be chosen higher than the critical values.

Simulation of single-stage PFC buck converter

The single-stage PFC buck converter with power factor correction for low-power applications is designed and simulated under open loop in the MATLAB/Simulink environment. A resistive load is considered for the simulation. The design parameters of the converter are listed in Table 1.

Simulation studies

The single-stage PFC buck converter is designed following the design constraints and is implemented in the Simulink environment. Appropriate measurement blocks are used for measuring the fundamental components of voltage and current to calculate the phase difference between the current and voltage at the source side. The converter is simulated under open-loop configuration with a resistive load. A 24 V ac supply is applied to a diode bridge rectifier. An LC-filter is used to reduce the ripple content of the rectifier output. The presence of this LC-filter causes distortion in the supply current, resulting in high THD, losses and poor power factor. The results of the simulation are described as follows. The supply voltage and current of the ac supply are shown in figure 2. Upon comparing the zero-crossings of the source current and voltage, the power factor is near to unity.
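The design values used in such a simulation follow from equations (1)-(4); a numerical sketch is given below. Since the paper's Table 1 values are not reproduced here, the specification numbers (10 W output at Vo = 10 V from a 24 Vrms source, 50 kHz switching) are stand-in assumptions, and the equations are the standard continuous-conduction boundary expressions.

```python
# Numerical sketch of design equations (1)-(4) with assumed specifications.
import math

V_in, V_o, P_o, f = 24.0 * math.sqrt(2.0), 10.0, 10.0, 50e3
R = V_o**2 / P_o                      # equivalent load resistance
D = V_o / V_in                        # eq. (1), buck duty ratio at peak input

L_crit_buck = (1 - D) * R / (2 * f)                  # eq. (2)
C_crit_buck = (1 - D) / (16 * L_crit_buck * f**2)    # eq. (3)
C_crit_bb = D / (2 * f * R)                          # eq. (4), buck-boost cell

print(f"D = {D:.2f}, R = {R:.1f} ohm")
print(f"L >= {L_crit_buck*1e6:.1f} uH, C_buck >= {C_crit_buck*1e6:.2f} uF, "
      f"C_buck-boost >= {C_crit_bb*1e6:.2f} uF")
```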
Figure 5. Waveforms of buck converter output voltage and current.

The harmonic spectrum of the source current computed using the FFT tool in Simulink is shown in figure 3. The THD of the ac source current is 3.46%. The power factor corresponding to this operating point is given by

pf = 1 / √(1 + THD²) = 0.995 (lag)

The output voltage and current of the rectifier are shown in figure 4, where the source current is discontinuous. In figure 5, the output voltage and current of the buck converter are shown, where the load current is continuous. From the results, the output power is computed as 10 W, satisfying the low-power application of the converter.

Hardware implementation

The simulated single-stage PFC buck converter is implemented in hardware and tested for its performance under laboratory conditions. The converter was tested with two different types of loads, viz. a resistive load and an LED load. The experimental setup of the single-stage PFC buck converter is shown in figure 6. The low ac voltage is derived from the supply mains through an autotransformer and is applied to the bridge rectifier. It is then converted into dc and applied to the PFC buck converter.

With 'R' load

The circuit is tested for its performance with a resistive load connected at the output. A rheostatic load is used so as to adjust the current level of the converter to its rated value. The waveforms are captured using a Tektronix TPS2024B four-channel isolated DSO with a suitable voltage probe and current probe arrangement. The captured source voltage, source current, output voltage and output current are shown in figure 7. The source current is nearly sinusoidal and is in phase with the supply voltage. The harmonic distortion in the source current and other parameters such as real, reactive and apparent powers are measured using a Fluke-3B power quality analyzer. The captured source voltage and current waveforms with measurements are shown in figure 8, and the power measurement is presented in figure 9. The harmonic spectrum of the source current in figure 10 measures the current THD to be 29%. The power factor of the ac source from the measurements is 0.93 lag and the displacement power factor is unity.

With LED load

The experiment is repeated with an LED strip as the load on the PFC buck converter. The performance of the converter with the LED load is observed using the Tektronix DSO and the Fluke-3B power quality analyzer. The source voltage and current and the load voltage and current waveforms captured on the Tektronix DSO are shown in figure 11. The source current and voltage captured using the Fluke-3B power quality analyzer are shown in figure 12. The power drawn by the converter is 73 W at a power factor of 0.97 lag. The harmonic spectrum of the source current is measured and is shown in figure 13.

Figure 10. Harmonic spectrum of source current captured using the Fluke-3B power quality analyzer.

Figure 11. Measurements of source current captured using the TPS2024B DSO for the LED load.

For both types of loads, the power factor is measured to be 0.93 and 0.97 respectively, which is close to unity. The hardware is powered continuously under a lab environment for a duration of 90 minutes with the LED load and tested for reliable operation. The objective is to test the converter performance under continuous operation. The converter performance was observed to be satisfactory.

Conclusion

The single-stage PFC buck converter for low-power applications was implemented in hardware and tested for its performance. The design of the converter is validated through simulation using the Matlab/Simulink tool.
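As a quick cross-check of the power-factor figures quoted above, the distortion power factor implied by a measured current THD is 1/√(1 + THD²); the overall power factor is this value multiplied by the displacement factor, so small differences from the rounded meter readings in the text are expected.

```python
# Distortion power factor implied by a measured current THD.
import math

def distortion_pf(thd: float) -> float:
    return 1.0 / math.sqrt(1.0 + thd**2)

for label, thd in [("simulation", 0.0346), ("R load (hardware)", 0.29)]:
    print(f"{label}: THD={thd:.2%} -> distortion pf={distortion_pf(thd):.3f}")
```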
Conclusion

A single-stage PFC buck converter for low-power applications was implemented in hardware and tested for its performance. The design of the converter was validated through simulation using the MATLAB/Simulink tool. The hardware model was tested with two different types of load, and the power factor of the ac supply was observed to be close to unity. Without the dc/dc converter stage, the power factor of the ac supply is poor because of the LC filter at the rectifier, which cannot be eliminated. The single-stage converter can work more reliably at low output power, and with reduced losses, when compared with conventional two-stage PFC circuits. Note, however, that single-stage PFC converters still process part of the power more than once, so their efficiency is lower than that of a conventional buck PFC rectifier. In future work, the converter performance can be analyzed under sudden load changes or sudden changes in the supply voltage. Based on that performance, a suitable controller can be designed to regulate the converter under supply voltage fluctuations and/or load changes.
2021-05-16T00:03:50.768Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "eb84b0be78f134eb290724d3a4efca9721dafeb7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1716/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c345ff7b16d3fa0fc2d13aefe28dbe8a2588b56a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
11643326
pes2o/s2orc
v3-fos-license
Molecular and Cellular Pathobiology

KIAA1324 Suppresses Gastric Cancer Progression by Inhibiting the Oncoprotein GRP78

Recent advances in genome and transcriptome analysis have contributed to the identification of many potential cancer-related genes. Furthermore, biological and clinical investigations of the candidate genes provide us with a better understanding of carcinogenesis and the development of cancer treatment. Here, we report a novel role of KIAA1324 as a tumor suppressor in gastric cancer. We observed that KIAA1324 was downregulated in most gastric cancers from transcriptome sequencing data and found that histone deacetylase was involved in the suppression of KIAA1324. Low KIAA1324 levels were associated with poor prognosis in gastric cancer patients. In the xenograft model, KIAA1324 significantly reduced tumor formation of gastric cancer cells and decreased development of preformed tumors. KIAA1324 also suppressed proliferation, invasion, and drug resistance and induced apoptosis in gastric cancer cells. Through protein interaction analysis, we identified GRP78 (glucose-regulated protein 78 kDa) as a KIAA1324-binding partner. KIAA1324 blocked oncogenic activities of GRP78 by inhibiting the GRP78-caspase-7 interaction and suppressing GRP78-mediated AKT activation, thereby inducing apoptosis. In conclusion, our study reveals a tumor-suppressive role of KIAA1324 via inhibition of GRP78 oncoprotein activities and provides new insight into the diagnosis and treatment of gastric cancer.

Introduction

Gastric cancer is the fourth most common type of cancer and the second leading cause of death from cancer worldwide (1). Although clinical treatment for gastric cancer has improved, gastric cancer therapy still remains challenging due to the difficulty of early detection and the complexity of the disease (2). Therefore, it is crucial to investigate novel genes that govern the development and progression of gastric cancer in order to elucidate the process of gastric carcinogenesis and develop effective gastric cancer treatments.

Recently, with advances in next-generation sequencing (NGS) technology, analysis of genomic and transcriptomic alterations in gastric cancer patients has been used to develop innovative methods for the diagnosis and treatment of gastric cancer (3,4). Molecular drivers of gastric cancer have been suggested through NGS-based genomic and transcriptomic analyses of mutation, deletion, amplification, fusion, and expression level. However, because gastric cancer is regarded as a heterogeneous and complex disease, the biological functions and clinical relevance of the candidate driver genes should be scrutinized before their application in gastric cancer therapy.
GRP78 (glucose-regulated protein 78 kDa) is a well-known therapeutic target gene that is highly activated in various cancers, including gastric cancer (5)(6)(7)(8). GRP78, also known as HSPA5 (heat-shock protein alpha 5), is a chaperone involved in protein folding in the endoplasmic reticulum (ER) and increases cell survival against apoptotic stresses, including anticancer drugs, by interacting with proapoptotic factors such as caspase-7 and BIK (9)(10)(11). In cancer, GRP78 is often relocalized to the plasma membrane (12,13). Cell surface GRP78 activates AKT signaling through interaction with PI3K, thereby promoting tumorigenesis (14,15). GRP78 inhibitors, such as small molecules and specific binding peptides, cause growth inhibition and apoptosis of cancer cells. Therefore, more effective drugs targeting GRP78 have been developed for cancer therapy (12,16,17). GRP78 has also been investigated and suggested as a biomarker and therapeutic target for gastric cancer therapy (18)(19)(20).

The KIAA1324 gene, also known as EIG121 (estrogen-induced gene 121), encodes a 1,013 amino acid (a.a.) transmembrane protein that is highly conserved among species (21). The correlation between KIAA1324 expression and prognosis in endometrial, ovarian, and pancreatic cancer patients was previously reported (21)(22)(23), but the role of KIAA1324 in gastric cancer has not yet been investigated. Deng and colleagues demonstrated that KIAA1324 localizes at the plasma membrane and endomembranes and is involved in autophagy (24). However, the biological functions of KIAA1324 are still poorly understood.

Here, we identified KIAA1324 as a candidate gastric tumor suppressor based on our transcriptome sequencing data from the tissues of gastric cancer patients. To investigate the potential tumor-suppressive role of KIAA1324 in gastric cancer, we analyzed the correlation between KIAA1324 expression and gastric cancer patient prognosis and evaluated the tumorigenic abilities of gastric cancer cells expressing exogenous KIAA1324 using in vitro and in vivo assays. We observed that low KIAA1324 levels were associated with poor prognosis in gastric cancer patients. KIAA1324 inhibited growth, invasiveness, and tumorigenic activity of gastric cancer cells and induced apoptosis by blocking the oncogenic activities of GRP78. Taken together, our data suggest a novel role of KIAA1324 as a tumor suppressor and a prognostic indicator in gastric cancer.

Primary gastric cancer tissues

Fifty pairs of gastric cancer and normal matched control tissues were obtained from the gastric cancer depository of the Gastrointestinal Division in the Department of Surgery at Seoul National University Hospital (Seoul, Republic of Korea). The Institutional Review Board of the Seoul National University Hospital approved management of the tissue depository and use of the tissues (IRB no. H-0806-072-248).

Cell culture and transfection

Human gastric cancer cell lines MKN28, AGS, and SNU16 were obtained from the Korean Cell Line Bank and authenticated by short tandem repeat profiling. These cell lines were expanded and used within 10 passages. The cells were maintained in RPMI1640 containing 25 mmol/L HEPES (WelGENE) supplemented with 10% FBS (WelGENE), 100 U/mL penicillin, and 100 μg/mL streptomycin (WelGENE). 293T cells were maintained in DMEM (WelGENE) supplemented with 10% FBS, 100 U/mL penicillin, and 100 μg/mL streptomycin. FuGENE6 (Roche Applied Science) was used as the transfection reagent.
Establishment of gastric cancer cells expressing the KIAA1324 gene or KIAA1324 shRNA

We established MKN28 and AGS cell lines expressing KIAA1324 in a doxycycline-dependent manner (tet-on) using a retroviral system, and SNU16 cell lines stably expressing control or KIAA1324 shRNA were generated as described previously (25). To establish the MKN28 and AGS tet-on KIAA1324 lines, the human KIAA1324 gene was cloned into the pCMV-3HA vector (Clontech). For virus generation, the 3HA-tagged KIAA1324 gene was inserted into a retroviral vector, pRetroX (Clontech). Viruses containing the KIAA1324 gene or the tetracycline response activator gene were produced according to the manufacturer's protocol (Cell Biolabs, Inc.). MKN28 and AGS tet-on KIAA1324 cells were generated by infection with these viruses and selected with 2 mg/mL G418 and 2 μg/mL puromycin. For KIAA1324 knockdown, lentiviral constructs containing KIAA1324 shRNA (TRCN0000263309 and TRCN0000263310) were purchased from Sigma. SNU16 cell lines stably expressing control or KIAA1324 shRNA were generated by lentiviral infection and selected with 2 μg/mL puromycin.

Annexin V-positive cell population analysis

Annexin V staining was performed with the BD Pharmingen Annexin V Apoptosis Detection Kit (BD Biosciences) according to the manufacturer's protocol. Cells were trypsinized, washed twice with PBS, and incubated in binding buffer containing Annexin V-FITC and propidium iodide (PI). Stained cells were analyzed by flow cytometry using the CELLQUEST program (Becton Dickinson).

Statistical analyses

All quantitative data are expressed as the mean ± SD. Kaplan-Meier analysis and Pearson χ² tests were performed using SPSS version 21.0 statistical software (IBM SPSS). Student t tests and one-way ANOVA were conducted using GraphPad Prism version 5 (GraphPad Software Inc.). P < 0.05 was considered statistically significant.
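For readers without SPSS or Prism, the survival comparison and categorical tests described here can be reproduced with open-source Python packages. The sketch below is a minimal equivalent under assumed data: the per-patient table, its column names, and the contingency counts are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test
from scipy.stats import chi2_contingency

# Hypothetical per-patient table: follow-up time (months), death indicator
# (1 = event), and KIAA1324 staining group from the tissue microarray.
df = pd.DataFrame({
    "months": [60, 12, 45, 8, 30, 72, 20, 55],
    "death": [0, 1, 0, 1, 1, 0, 1, 0],
    "kiaa1324": ["strong", "negative", "moderate", "negative",
                 "weak", "strong", "weak", "moderate"],
})

# Kaplan-Meier survival estimate per expression group.
kmf = KaplanMeierFitter()
for group, sub in df.groupby("kiaa1324"):
    kmf.fit(sub["months"], event_observed=sub["death"], label=group)
    print(group, "median survival:", kmf.median_survival_time_)

# Log-rank test across the expression groups.
res = multivariate_logrank_test(df["months"], df["kiaa1324"], df["death"])
print("log-rank P =", res.p_value)

# Pearson chi-square for a clinicopathologic variable, e.g. staining group
# (rows) versus pTNM stage (columns); counts are placeholders.
table = np.array([[10, 18, 30, 22],
                  [14, 20, 25, 15],
                  [25, 22, 18, 10],
                  [30, 20, 12, 5]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.3g}")
```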
KIAA1324 expression is suppressed in most gastric cancers

To identify novel gastric cancer-related genes, we analyzed transcriptome sequencing data acquired from 16 paired normal and tumor tissues of gastric cancer patients and 18 gastric cancer cell lines (26) and sorted differentially expressed genes (DEGs) between gastric cancer and normal tissues (P < 0.01 with >2-fold change; Fig. 1A). Among the DEGs, we focused on the KIAA genes, which were initially identified through the Kazusa cDNA project, because unknown genes often provide new insight into understanding cancer, and most KIAA genes have remained functionally uncharacterized (27). We confirmed that the expression patterns of the selected KIAA genes in gastric cancer cell lines were similar to those in primary tumor tissues (Fig. 1A). In particular, our data showed upregulation of KIAA1524, also known as CIP2A (cancerous inhibitor of protein phosphatase 2A), which was reported as an oncogene in gastric cancer (28). However, other KIAA genes have been poorly investigated, especially in gastric cancer. In this study, we focused our attention on the role of KIAA1324, the most significantly downregulated KIAA gene in this set of gastric tissues and cancer cell lines. Suppressed KIAA1324 expression in most tumor tissues and gastric cancer cell lines was further validated using quantitative reverse transcription PCR (qRT-PCR; Supplementary Fig. S1). We also examined KIAA1324 expression in paired tissues of additional gastric cancer patients using qRT-PCR. As expected, KIAA1324 expression was significantly suppressed in cancer tissues (P < 0.0005; Fig. 1B). Paired comparison analysis of the tissues showed that 78% of patients had low levels of KIAA1324 at the tumor site compared with the normal region (Fig. 1C). These results were further supported by public microarray data (GSE13861; ref. 29), which showed low KIAA1324 expression in gastric tumor tissues (P = 0.0002; Fig. 1D).

The loss of gene expression is mainly caused by genetic alteration or epigenetic modification. Therefore, to investigate the mechanism by which KIAA1324 is regulated in gastric cancer cell lines, we first examined the correlation between gene copy number variation (CNV) and transcription level using CCLE database analysis (Supplementary Fig. S2). The copy number of the KIAA1324 gene in 29 gastric cancer cell lines was barely changed, while KIAA1324 expression was low in most gastric cancer cell lines. This result suggested that suppressed KIAA1324 expression is independent of CNV. Next, we explored whether KIAA1324 expression is regulated epigenetically using decitabine, a DNA methylation inhibitor, and MS-275, a synthetic histone deacetylase inhibitor. While MS-275 treatment restored KIAA1324 transcription in gastric cancer cell lines with negligible expression, decitabine did not influence KIAA1324 expression (Fig. 1E). Combination treatment with MS-275 and decitabine enhanced restoration of KIAA1324 transcription compared with MS-275 alone in MKN1 cells, but not in MKN28 and SNU638 cells. It is known that densely methylated DNA associates with transcriptionally repressive chromatin characterized by the presence of underacetylated histones, and that these two epigenetic processes are dynamically linked (30). Our data suggest that the density of CpG island methylation and the level of histone deacetylation of the KIAA1324 gene vary among the three cell lines used in our study. Taken together, these results suggest that epigenetic inhibition of KIAA1324 may be favored during carcinogenesis, indicating a possible role of KIAA1324 as a tumor suppressor in gastric cancer.

Low levels of KIAA1324 are associated with poor prognosis in gastric cancer patients

To evaluate the clinical impact of KIAA1324, we performed a gastric tumor tissue microarray using the anti-KIAA1324 antibody and tumor tissues from 428 patients. As shown in Fig. 2A, the tumor tissues were classified into four groups (negative, weak, moderate, or strong) according to KIAA1324 expression. On the basis of this classification, we analyzed the cumulative survival rate of the gastric cancer patients who provided the tumor tissues using the Kaplan-Meier test (Fig. 2B). We found that lower KIAA1324 expression was correlated with reduced patient survival rates (P < 0.001). We also analyzed the relationship between KIAA1324 expression and clinicopathologic features (Fig. 2C). Patients aged 65 years or younger had lower KIAA1324 expression than older patients (P = 0.038). However, KIAA1324 expression was not associated with gender (P = 0.389). KIAA1324 expression was negatively correlated with tumor invasion (P = 0.001), pTNM stage (P < 0.001), lymph node metastasis (P = 0.005), and distant metastasis (P = 0.001). In particular, patients who had KIAA1324-deficient gastric tumors tended to develop more advanced and invasive gastric cancer. These results indicate that low levels of KIAA1324 expression are significantly correlated with poor prognosis in gastric cancer patients.
KIAA1324 inhibits in vivo tumor formation of gastric cancer cells

To investigate a possible role of KIAA1324 as a tumor suppressor in gastric cancer, we examined the effect of KIAA1324 on in vivo tumorigenesis of gastric cancer cells. For the xenograft assay, we generated stable MKN28 gastric cancer cell lines with tetracycline-inducible (tet-on) luciferase (Luc) or KIAA1324 expression; luciferase was used as a control. We injected the MKN28 cells subcutaneously into mice after inducing KIAA1324 expression with doxycycline, a tetracycline analog, and observed tumor formation (Fig. 3A). Induction of KIAA1324 dramatically decreased the tumor-forming ability of MKN28 cells, as demonstrated by significantly reduced tumor sizes and weights at the time of harvest (P < 0.0005; Fig. 3B-D). Next, we investigated whether KIAA1324 affects the development of preformed tumors. Three weeks after subcutaneous injection of MKN28 cells with tet-on Luc or KIAA1324, the mice were fed water containing doxycycline every other day for a month (up to 7 weeks) to induce KIAA1324 expression (Fig. 3E). Tumor formation was observed within 3 weeks after injection. We measured tumor sizes weekly and obtained tumor weights at the time of harvest (Fig. 3E-H). The results showed that KIAA1324 induction significantly reduced the size (P < 0.0005) and weight (P < 0.001) of preformed tumors. KIAA1324 expression in tumors was verified by RT-PCR (Fig. 3I). These data demonstrate that KIAA1324 inhibits the tumorigenic activity of gastric cancer cells in vivo and the development of preformed tumors.

KIAA1324 suppresses growth, invasion, and drug resistance of gastric cancer cells

We further investigated whether KIAA1324 influences characteristic features of gastric cancer cells, including proliferation, invasiveness, and drug resistance. In addition to MKN28 cells, we established stable AGS gastric cancer cell lines with tet-on Luc or KIAA1324. The proliferation assay showed that KIAA1324 remarkably inhibited growth of MKN28 and AGS cells (Fig. 4A). KIAA1324 also suppressed anchorage-dependent and -independent colony-forming activities of gastric cancer cells (Fig. 4B and C). To examine the effect of KIAA1324 on the invasiveness of gastric cancer cells, we assessed the migration and invasion abilities of MKN28 and AGS cells expressing KIAA1324. As shown in Fig. 4D and E, KIAA1324 significantly reduced the migration and invasion of gastric cancer cells. We next explored whether KIAA1324 regulates the drug resistance of gastric cancer cells by treating MKN28 and AGS cells expressing KIAA1324 with anticancer drugs such as cisplatin and etoposide (Fig. 4F). Cells expressing KIAA1324 showed decreased viability in the presence of cisplatin or etoposide compared with control cells. In addition, cisplatin- and etoposide-mediated apoptosis was increased in KIAA1324-expressing MKN28 cells (Supplementary Fig. S3). These results suggested that KIAA1324 makes the cells more sensitive to anticancer drugs. To further investigate whether the loss of KIAA1324 influences proliferation and drug resistance in SNU16 gastric cancer cells, which express KIAA1324, we made an SNU16 cell line stably expressing KIAA1324 shRNA using a lentiviral system. KIAA1324 knockdown was confirmed by qRT-PCR (Fig. 4G). Although KIAA1324 knockdown did not dramatically affect SNU16 cell proliferation, it markedly enhanced cisplatin resistance (Fig. 4H and I). Moreover, loss of KIAA1324 decreased cisplatin- and staurosporine-induced apoptosis (Fig. 4J and Supplementary Fig. S4).
Taken together, our data demonstrate that KIAA1324 inhibits the growth, invasiveness, and drug resistance of gastric cancer cells.

Ectopic expression of KIAA1324 induces apoptosis of gastric cancer cells

To examine whether KIAA1324 induces apoptosis of gastric cancer cells, we performed annexin V staining and flow cytometry analysis (Fig. 5A). Induction of KIAA1324 expression increased the annexin V-positive cell population, indicating that KIAA1324 induced apoptosis in MKN28 and AGS cells. We also confirmed KIAA1324-mediated apoptosis using a TUNEL assay (Fig. 5B). Next, we examined the expression of apoptosis markers using immunoblotting and RT-PCR. We verified activation of caspase-3, an apoptosis effector caspase, in gastric cancer cells expressing KIAA1324 by detecting cleavage of caspase-3 (Fig. 5C). Interestingly, expression of proapoptotic genes such as BAX and BIM was also increased by KIAA1324 (Fig. 5D). In addition, we investigated whether KIAA1324 affects cell-cycle distribution. KIAA1324 did not have a considerable effect on the cell cycle while inducing apoptosis (Supplementary Fig. S5). These results suggest that KIAA1324 exerts its main effect on apoptosis. Because it has been previously suggested that KIAA1324-mediated apoptosis might occur through excessive autophagy (24), we examined the expression of LC3B, an autophagy marker (Supplementary Fig. S6). However, we observed no significant difference between control and KIAA1324-expressing cells. In summary, our data suggest that KIAA1324 induces apoptosis through activation of a caspase cascade rather than autophagy.

The transmembrane domain of KIAA1324 is important for KIAA1324-mediated apoptosis

It has been reported that KIAA1324 is mainly localized in the membrane fraction, and that deletion of its transmembrane domain (TM) limits its localization to the cytosol (24). To explore the role of the TM in KIAA1324-induced apoptosis, we generated MKN28 and AGS cells with a tet-on TM-deleted mutant of KIAA1324 (KIAA1324 ΔTM). We confirmed expression of KIAA1324 ΔTM using immunoblotting (Fig. 6A and B). As shown in Fig. 6C, TM deletion resulted in defective cellular localization of KIAA1324. KIAA1324 ΔTM neither increased the annexin V-positive cell population nor activated caspase-3 (Fig. 6D and E). These results indicate that the TM of KIAA1324 is responsible for its ability to induce apoptosis as well as its cellular localization.
To further examine which regions of KIAA1324, besides the TM, are required for KIAA1324-mediated apoptosis, we analyzed apoptosis in cells expressing KIAA1324 mutants with deletion of the N-terminal side (ΔN) or C-terminal side (ΔC) of the TM. KIAA1324 ΔC exhibited cellular localization and apoptosis induction similar to wild-type KIAA1324 (Supplementary Fig. S7), whereas KIAA1324 ΔN did not induce apoptosis even though its localization was similar to that of wild-type KIAA1324 (Supplementary Fig. S8). To find a more specific domain important for KIAA1324-mediated apoptosis, we examined the effects of a.a. 304-931 (ΔC1) and a.a. 657-931 (ΔC2) of KIAA1324 on apoptosis induction. While KIAA1324 ΔC induced apoptosis, ΔC1 and ΔC2 did not have any effect on KIAA1324-mediated apoptosis (Supplementary Fig. S9). These results demonstrate that KIAA1324 induces apoptosis of gastric cancer cells through the TM and the a.a. 1-303 region, suggesting that both its cellular localization and its N-terminal region are required for its proapoptotic activity.

GRP78 oncoprotein is identified as a KIAA1324-binding partner

To elucidate the regulatory mechanism of KIAA1324-induced apoptosis, we identified KIAA1324-specific binding partners through protein interaction analysis using a formaldehyde cross-linking method (Fig. 7A; ref. 31). We found GRP78 as a KIAA1324-binding partner and validated the interaction between KIAA1324 and GRP78 (Fig. 7B). Immunofluorescence analysis also demonstrated that KIAA1324 colocalized with GRP78 (Fig. 7C). Because GRP78 is predominantly localized in the ER, we investigated whether KIAA1324 also exists in the ER by examining colocalization with PDI, an ER marker. As expected, KIAA1324 was found in the ER (Supplementary Fig. S10). KIAA1324 was also located at the cell membrane, in accordance with the previous report (24).
To determine the binding region of GRP78 with KIAA1324, we examined interactions between KIAA1324 and various domain-deleted mutants of GRP78 (Fig. 7D-F). Immunoprecipitation assays showed that a.a. 1-80 of GRP78, which contains Thr37, an ATP-binding site (32), was required for the GRP78-KIAA1324 interaction. In addition, analysis of the interactions between GRP78 and domain-deleted mutants of KIAA1324 demonstrated that a.a. 1-303 of KIAA1324 was responsible for the interaction with GRP78 (Fig. 7G and H and Supplementary Fig. S11).

KIAA1324 inhibits the oncogenic activity of GRP78

GRP78 increases cancer cell survival by exerting antiapoptotic activity through its interaction with caspase-7 in the ER and by activating pro-proliferative PI3K/AKT signaling at the cell membrane. Given that inhibition of GRP78 induces apoptosis and increases anticancer drug sensitivity in gastric cancer cells (16,20), we investigated the effect of siRNA-mediated GRP78 knockdown on MKN28 cells. As expected, we found that GRP78 knockdown induced apoptosis of MKN28 cells and decreased AKT phosphorylation (Supplementary Fig. S12).

It has been reported that Thr37 of GRP78 is important for ATP-induced conformational change, which is critical for the interaction of GRP78 with its binding partners (32), and that the N-terminal side of the GRP78 ATPase domain is required for AKT activation (15). Therefore, because KIAA1324 interacted with the N-terminal region of GRP78, we first investigated whether KIAA1324 affects GRP78 binding to caspase-7. Indeed, KIAA1324 wild-type and the ΔC mutant blocked the interaction between GRP78 and caspase-7 and induced cleavage of caspase-7 (Fig. 7I). However, KIAA1324 ΔTM and the N-terminal region (N) did not affect GRP78 binding to caspase-7 despite their interaction with GRP78. This result suggests that the TM of KIAA1324 is required for KIAA1324 to interfere with the interaction between GRP78 and caspase-7. Next, we investigated whether KIAA1324-mediated regulation of GRP78 affects AKT activation by examining AKT phosphorylation (Fig. 7J). The result showed reduced AKT phosphorylation in MKN28 cells expressing KIAA1324. Taken together, these data demonstrate that KIAA1324 not only inhibits the interaction between GRP78 and caspase-7 but also may regulate GRP78-mediated AKT activation, suggesting that KIAA1324 may exert antitumor activity by suppressing the oncogenic activities of GRP78 (Supplementary Fig. S13).
Discussion

To date, a number of tumor suppressor genes have been identified and investigated for their function in tumorigenesis. However, many potential tumor suppressor genes remain unknown and uncharacterized. Recently, analysis of genetic alterations and transcriptome changes in cancer tissues and cell lines using NGS has become a promising method for identifying candidate tumor suppressor genes (3,33). Moreover, studying the biological functions of these genes improves our understanding of the underlying mechanisms of carcinogenesis and aids in cancer prevention and therapeutics. In the current study, KIAA1324 was identified as a novel tumor suppressor that is downregulated in human primary gastric cancer tissues and cell lines through total mRNA sequencing analysis (Fig. 1). CNV analysis and an epigenetic modulation assay of gastric cancer cell lines showed that suppression of KIAA1324 expression in gastric cancer cells is caused by epigenetic regulation rather than genetic alteration (Fig. 1E and Supplementary Fig. S2). A study of the clinical impact of KIAA1324 demonstrated that KIAA1324 can be used as a biomarker for prognostic prediction of gastric cancer (Fig. 2). Furthermore, investigation of the effects of KIAA1324 on proliferation, tumorigenic activity, and apoptosis of gastric cancer cells indicated that KIAA1324 may function as a gastric tumor suppressor (Figs. 3, 4, 5). In particular, induction of KIAA1324 expression in preformed tumors significantly reduced tumor size (Fig. 3E-H). This result suggests that, given the remarkable recent developments in tumor-specific drug and gene delivery systems for cancer therapy (34,35), KIAA1324 gene delivery or a KIAA1324-inducible drug release system specifically targeting gastric tumors could be a feasible strategy for gastric cancer therapy.

GRP78 has been regarded as a promising therapeutic target for cancer therapy. As GRP78 enhances cancer cell survival by protecting cancer cells from apoptotic stresses such as anticancer drugs, targeting GRP78 increases the efficacy of cancer treatment (17). In cancer, the role of GRP78 at the cell membrane as well as in the ER has drawn interest because increased cell surface GRP78 is detected in various cancers. GRP78 is principally localized in the ER lumen; however, it has also been demonstrated that GRP78 has putative transmembrane domains and localizes at the cellular membrane as well as the ER membrane (9,13). Cell surface GRP78 has been reported to regulate proliferation, migration, and invasion of cancer cells via regulation of various cellular signaling pathways, including the TGFβ and AKT signaling pathways (36)(37)(38)(39). Peptides targeting cell surface GRP78 have suppressed tumor growth and invasion, suggesting that GRP78-targeting peptides could be a therapeutic strategy for patients with cell surface GRP78-positive tumors (12,40). We demonstrated that KIAA1324 physically interacted with the N-terminal domain of GRP78 through its own N-terminal region and might regulate GRP78-mediated AKT activation (Fig. 7).
A domain prediction program indicated that the N-terminal region of KIAA1324 may be an extracellular domain, and a previous report and our immunofluorescence data showed that KIAA1324 localizes at the cell membrane (24). Taken together, these findings suggest that KIAA1324 may also modulate cell surface GRP78 extracellularly through its N-terminal domain. On the basis of our findings, identification of the critical motif of KIAA1324 necessary for the interaction with GRP78 may provide a platform for the development of GRP78-targeting KIAA1324 peptides as anticancer therapeutic agents.

In cancer, the role of GRP78 as a receptor at the cell surface depends on extracellular ligands. Alpha-2 macroglobulin interacts with the N-terminal region of cell surface GRP78 and activates AKT signaling, leading to increased cell proliferation (41). However, Kringle 5 induces caspase-7 activation by binding to cell surface GRP78 (40). Par-4 also induces apoptosis via interaction with cell surface GRP78 and activation of the FADD-caspase-8-caspase-3 pathway (42). In this study, we demonstrated that, through interaction with GRP78, KIAA1324 induced activation of caspase-3 and -7 and decreased AKT signaling. Considering previous reports, our data suggest that KIAA1324 may release caspase-7 from GRP78 in the ER, activate the FADD-caspase-8-caspase-3 pathway at the cell membrane, and block alpha-2 macroglobulin from binding to cell surface GRP78, thereby inducing apoptosis.

Deng and colleagues reported that KIAA1324 is predominantly in the membrane fraction and colocalizes with autophagosome markers, suggesting that it is involved in autophagy (24). They also suggested that the observed KIAA1324-induced apoptosis in 293T cells occurred due to excessive autophagy. However, autophagic death and apoptosis are regarded as different types of cell death (43,44). Furthermore, we could not observe KIAA1324-mediated autophagy in gastric cancer cells (Supplementary Fig. S6). In our study, KIAA1324 induced cell death via an apoptotic mechanism. We found that KIAA1324 blocked the interaction between GRP78 and proapoptotic caspase-7, suggesting that KIAA1324 induces apoptosis of gastric cancer cells by inhibiting the antiapoptotic activity of GRP78.
It has been reported that transmembrane proteins in the ER, such as TMEM166 and TMEM214, are involved in the induction of apoptosis (45,46). TMEM166 contains a single transmembrane domain and induces both autophagy and apoptosis. Compared with normal tissues, TMEM166 is downregulated in gastric adenocarcinoma (47), and adenovirus-mediated introduction of TMEM166 suppressed tumor growth through autophagy and apoptosis (48). TMEM214, which contains two transmembrane domains and is localized to the outer membrane of the ER, regulates ER stress-induced apoptosis through activation of caspase-4 (46). Here, we observed that KIAA1324, a transmembrane protein, induced apoptosis in the ER, and that TM deletion abolished its apoptotic activity even though the TM deletion mutant still bound GRP78. It is possible that these transmembrane proteins interact with each other at the ER membrane to induce apoptosis by regulating GRP78. Further investigation of the interactions among these proteins will provide a better understanding of KIAA1324-mediated apoptosis in cancer.

AKT signaling plays a key role in various cellular processes including proliferation, survival, metabolism, differentiation, and apoptosis (49). Loss of AKT inhibitors such as PTEN and SHIP, or upregulation of AKT activators such as GRP78 and Src, induces dysregulation of AKT activation and has been implicated in carcinogenesis. In the current study, we observed that KIAA1324 reduced AKT phosphorylation in MKN28 cells. This phenomenon may be attributed to KIAA1324-mediated inhibition of GRP78 activity. However, we cannot exclude the possibility that KIAA1324 regulates AKT signaling directly or through interaction with other regulators of AKT. Therefore, further studies exploring these possibilities will support the tumor-suppressive role of KIAA1324 as a negative regulator of AKT.

The role of KIAA1324 in cancer has been evaluated only in endometrial, pancreatic, and ovarian cancer to date (21)(22)(23). In type I endometrial cancer, KIAA1324 expression is higher at early stage than in benign tumors, but reduced in high-grade and high-stage endometrial carcinoma. In addition, KIAA1324 is downregulated in type II endometrial cancer, which is more aggressive than type I. In pancreatic cancer, KIAA1324 is also highly expressed in early-stage tumors, but its expression decreases in advanced cancer. High KIAA1324 expression in endometrial and pancreatic carcinoma is correlated with favorable prognosis. However, in high-grade serous carcinoma of the ovary/peritoneum, high expression of ERα and KIAA1324 is associated with poor survival. This indicates that KIAA1324 may play different roles in various types of cancer. In our study, we provide evidence supporting a tumor-suppressive role of KIAA1324 in gastric cancer through induction of apoptosis. Our study may lead to further investigation of the function of KIAA1324 in other cancers.

In conclusion, our study demonstrated that KIAA1324 is epigenetically downregulated in gastric cancer and positively correlated with the prognosis of gastric cancer patients. We also revealed that KIAA1324 suppressed the growth of gastric cancer cells and tumors by inhibiting the oncogenic activity of GRP78. Taken together, we suggest KIAA1324 as a novel gastric tumor suppressor and provide new insight for the application of KIAA1324 in the diagnosis and treatment of gastric cancer.
Figure 1. KIAA1324 was downregulated in gastric cancer. A, transcriptome sequencing analysis of mRNA levels of KIAA genes in paired tissue samples from 16 gastric cancer patients and 18 gastric cancer cell lines. B and C, KIAA1324 mRNA expression levels in paired tissue samples from 50 gastric cancer patients were examined using qRT-PCR (B), and comparative analysis (C) of the paired tissues was performed. D, KIAA1324 expression in gastric cancer patients was examined from public microarray data (GSE13861; P = 0.0002). E, epigenetic modulation assay was conducted using the inhibitors decitabine (Dec) and MS-275. KIAA1324 mRNA levels in MKN1, MKN28, and SNU638 cells were assessed using RT-PCR two days after treatment with decitabine and MS-275. Student t tests were performed for statistical analysis (***, P < 0.0005). Error bars indicate SD.

Figure 2. Decreased KIAA1324 expression was correlated with poor prognosis in 428 gastric cancer patients. A, representative immunostaining images of gastric cancer tissues classified according to KIAA1324 expression (negative, weak, moderate, or strong). B and C, cumulative survival rate (P < 0.001; B) and clinicopathologic features (C) of the 428 patients categorized into the four groups were investigated using tissue microarray and patient information. The Kaplan-Meier method and Pearson χ² tests were used for survival and statistical analyses, respectively.

Figure 3. KIAA1324 decreased the in vivo tumorigenic activity of gastric cancer cells. A, MKN28 cells harboring tetracycline-inducible (tet-on) luciferase (Luc) or KIAA1324 were treated with 1 μg/mL doxycycline for 24 hours before subcutaneous injection into 6 mice per group. After injection, mice were fed 2 mg/mL doxycycline (dox) in 10% sucrose water every two days until the time of harvest. Tumor size was measured at the indicated times. B, representative images of harvested tumors. Size (C) and weight (D) analyses of the harvested tumors. E, MKN28 cells harboring tet-on Luc or KIAA1324 were injected subcutaneously into 10 mice per group. Three weeks after injection, mice were fed 2 mg/mL doxycycline in 10% sucrose water every two days until the time of harvest. F, representative images of harvested tumors. Size (G) and weight (H) of the harvested tumors were analyzed. I, KIAA1324 expression in tumors was examined using RT-PCR. *, P < 0.005; **, P < 0.001; ***, P < 0.0005; n.s., not significant.
Figure 4. KIAA1324 inhibited proliferation, invasiveness, and drug resistance of gastric cancer cells. A, doxycycline-induced KIAA1324 expression in MKN28 and AGS cells harboring tet-on KIAA1324 was verified by immunoblotting. Proliferation of MKN28 and AGS cells harboring tet-on Luc or KIAA1324 was evaluated daily in the presence or absence of 1 μg/mL doxycycline. B, representative images of methylene blue-stained colonies of MKN28 and AGS cells expressing Luc or KIAA1324. Relative colony-forming units (CFU) were calculated by dividing the colony number of doxycycline-untreated cells by that of doxycycline-treated cells. C, soft agar colony forming assay was performed to investigate the effect of KIAA1324 on anchorage-independent colony formation of MKN28 cells. Migration (D) and invasion (E) abilities of MKN28 and AGS cells expressing Luc or KIAA1324 were measured using Transwell migration and invasion assays, respectively. The relative migration or invasion rates were calculated by dividing the cell number of doxycycline-untreated cells by that of doxycycline-treated cells. F, cell viability of MKN28 and AGS cells harboring tet-on KIAA1324 was measured 24 hours after treatment with 25 μmol/L cisplatin or 25 μmol/L etoposide in the absence or presence of 1 μg/mL doxycycline. G, KIAA1324 knockdown in SNU16 cells expressing KIAA1324 shRNA was examined using qRT-PCR. H, growth of SNU16 cells expressing control or KIAA1324 shRNA was measured daily. Cell viability (I) and caspase-3 activation (J) of SNU16 cells expressing control or KIAA1324 shRNA were evaluated 24 hours after treatment with 25 μmol/L cisplatin by cell counts and immunoblotting, respectively. *, P < 0.005; ***, P < 0.0005.
Figure 5. KIAA1324 induced apoptosis of gastric cancer cells. A, KIAA1324 expression was induced in MKN28 and AGS cells harboring tet-on KIAA1324 by treatment with 1 μg/mL doxycycline for 36 hours. Flow cytometry analysis was performed using cells stained with Annexin V-FITC and propidium iodide (PI). The Annexin V-positive and PI-negative population indicates early apoptotic cells. The right panel shows quantification of the early apoptotic cell population from triplicate samples. B, TUNEL assay was conducted using MKN28 cells harboring tet-on KIAA1324 with or without 1 μg/mL doxycycline for 48 hours. C, cleavage of caspase-3 and PARP in MKN28 and AGS cells expressing KIAA1324 was examined by immunoblotting. D, mRNA levels of BAX, BIM, KIAA1324, and GAPDH in MKN28 and AGS cells harboring tet-on KIAA1324 were evaluated by RT-PCR at the indicated times after treatment with 1 μg/mL doxycycline. *, P < 0.02; **, P < 0.003.

Figure 6. The KIAA1324 transmembrane domain was required for the induction of apoptosis. A, a schematic diagram of full-length KIAA1324 (WT) and the transmembrane domain (TM) deletion constructs. B, expression of the TM deletion construct (KIAA1324 ΔTM) in MKN28 and AGS cells harboring tet-on KIAA1324 ΔTM was verified by immunoblotting. C, immunofluorescence assays show cellular localization of HA-tagged KIAA1324 WT and ΔTM in MKN28 and AGS cells. Nuclei were stained with DAPI. Scale bar, 20 μm. D, flow cytometry analysis was performed in MKN28 and AGS cells harboring tet-on KIAA1324 ΔTM using Annexin V-FITC and propidium iodide (PI) 36 hours after treatment with 1 μg/mL doxycycline. The Annexin V-positive and PI-negative population indicates early apoptotic cells. The right panel shows quantification of the early apoptotic cell population from triplicate samples. E, caspase-3 cleavage in MKN28 cells expressing KIAA1324 WT and ΔTM was examined by immunoblotting.

Figure 7. KIAA1324 inhibited the antiapoptotic activity of GRP78. A, a scheme for the identification of KIAA1324-binding partners. B, twenty-four hours after treatment with 1 μg/mL doxycycline in MKN28 cells harboring tet-on KIAA1324, an immunoprecipitation assay was performed using the anti-HA antibody to verify the interaction between HA-tagged KIAA1324 and GRP78. C, immunofluorescence assays were conducted using anti-HA and anti-GRP78 antibodies to detect colocalization of KIAA1324 and GRP78. Nuclei were stained with DAPI. D, a schematic diagram of the GRP78 domain deletion constructs. SB, substrate-binding domain. E and F, interactions of the transfected HA-tagged KIAA1324 with GFP-tagged GRP78 WT and domain deletion mutants were assessed using immunoprecipitation assays in 293T cells. G, a schematic diagram of KIAA1324 domain deletion mutants. H, immunoprecipitation assay was performed by transfecting untagged GRP78 and HA-tagged KIAA1324 WT and domain deletion mutants into 293T cells. I, the interaction between GRP78 and caspase-7 was evaluated in MKN28 cells harboring tet-on KIAA1324 WT, ΔTM, ΔC, or N mutant using immunoprecipitation assay 24 hours after treatment with 1 μg/mL doxycycline. J, phosphorylation of AKT was assessed in MKN28 cells expressing Luc or KIAA1324 by immunoblotting.
2018-04-03T04:08:33.164Z
2015-08-01T00:00:00.000
{ "year": 2015, "sha1": "5c925e6364369553d7dc9e711774664aa427dbb0", "oa_license": "CCBY", "oa_url": "https://aacr.figshare.com/ndownloader/files/39852201", "oa_status": "GREEN", "pdf_src": "Grobid", "pdf_hash": "798966b5615283a616c816ca5b37c76dd61a7aaa", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235249706
pes2o/s2orc
v3-fos-license
Knowledge Mapping of Dietary Factors of Metabolic Syndrome Research: Hotspots, Knowledge Structure, and Theme Trends

Background: The global incidence of metabolic syndrome (MetS) is continuously increasing, making it a potential worldwide public health concern. Research on dietary factors related to MetS has attracted considerable attention in recent decades. However, the research hotspots, knowledge structure, and theme trends for the dietary factors associated with MetS remain unknown and have not yet been systematically mapped. This study aimed to review the research status of diet as a risk factor for MetS through bibliometric methods. Bibliometric analysis was conducted using the Web of Science database. Research hotspots were identified using biclustering analysis with the gCLUTO software, knowledge structure was explored via social network analysis using the Ucinet software, and theme trends were investigated using evolutionary analysis with the SciMAT software. In total, 1,305 papers were analyzed. The research output on the dietary factors associated with MetS increased steadily, and the research scope gradually expanded and diversified. Overall, eight research hotspots, four key dietary nodes, and four motor themes on the dietary factors associated with MetS were identified. Fatty acids, dietary fiber, and polyphenols have been the focus of research in this field over the years. Evolutionary analysis showed that fish oil and vitamin C were well-developed research foci recently, and prebiotics was recognized as an emerging theme with developmental potential. These findings provide a better understanding of the research status of the dietary factors associated with MetS and a reference for future investigations.

INTRODUCTION

Metabolic syndrome (MetS) is a multicomponent disorder comprising at least three metabolic parameters among increased waist circumference, low high-density lipoprotein (HDL) cholesterol level, impaired fasting glucose level, elevated blood pressure, and elevated triacylglycerol levels. MetS is associated with a high risk of both type 2 diabetes and cardiovascular disease (CVD) (1)(2)(3). The increasing global incidence of MetS makes it a public health concern. National Health and Nutrition Examination Survey data from 2011 to 2016 show that ∼34.7% of US adults have MetS (4). In Europe, the prevalence of MetS has reached 24.3% (5), and a systematic review found a pooled prevalence of 24.5% among subjects aged 15 years and older in China (6). The occurrence of MetS is related to both environmental and genetic factors, and dietary factors play a major role in the cause and management of the syndrome (7,8). Evidence has demonstrated that diet and exercise are effective and complementary in the treatment of MetS and its underlying components (9). Accordingly, there have been numerous studies on the dietary factors associated with MetS. Foods such as fruits, vegetables, whole grains, soybean, tea, milk, and lipids, as well as vitamin supplements, have been investigated for their potential influence on MetS (10)(11)(12)(13)(14)(15)(16). Several reviews have summarized the growing evidence on the role of dietary patterns, fat consumption, whole grains, the quality and quantity of carbohydrates, moderate alcohol consumption, etc., in the development and prevention of MetS (17)(18)(19).
Based on the analysis of the association of dietary factors with MetS, an extensive range of strategies to reduce the occurrence of MetS has been suggested, including regular physical activity (20); limiting the intake of total carbohydrate (21), total fat, saturated fatty acids (22), red meat (23), etc.; and increasing dietary intake of fruits, vegetables, fish, polyphenols (24), monounsaturated fatty acids (25), etc. However, no study has provided a comprehensive overview of the current trends, primary areas of emphasis, and changes in research themes on the dietary factors associated with MetS.

Bibliometric analysis and visualization analysis are well-established research methods in information science that are widely used in many scientific fields [e.g., neural stem cell research (26), environmental health (27), and production management (28)]. Recent developments in bibliometric analysis software tools have made scientific mapping and quantitative analysis more accessible. Given the significant progress in understanding the dietary factors related to MetS, quantitative and qualitative evaluation of scientific achievements will help to understand the development status, research interests, and current trends in this field. Therefore, the primary purpose of this study was to map the hotspots, knowledge structure, and theme trends of this field over the past two decades (2000-2020), using biclustering analysis of word co-occurrence with the gCLUTO software, social network analysis of highly frequent keywords and highly cited papers with the Ucinet software, and evolutionary analysis of research themes with the SciMAT software. Further, this study explored the factors behind the hot research topics and theme evolution that have propelled recent progress in this field, and points out possible future research directions. These findings can help researchers keep up with relevant developments to understand this field better, and provide some hints for researchers when launching new projects.

Data Resource and Search Strategy

The Web of Science (WoS) database was searched for relevant studies on the dietary factors associated with MetS from 2000 to 2020, with no restrictions on language. The following search strategy was used: TS = ("metabolic syndrome" OR MetS OR "Insulin Resistance Syndrome X" OR "Metabolic X Syndrome" OR "Metabolic Cardiovascular Syndrome" OR "Reaven Syndrome X" OR "Dysmetabolic Syndrome X") AND TS = (diet OR dietetic OR dietary OR food OR foods). The search was carried out on July 22, 2020. Included studies met the following criteria: (1) human studies; (2) article or review; (3) focused on the study of dietary factors associated with MetS. All 22,392 records retrieved from Web of Science were imported into NoteExpress version 3.2.0.7535 to process the papers (29). The basic information from each paper, such as title, author, literature source, abstract, keywords, and publication date, was extracted. Two reviewers (X-S L and Y-X C) independently evaluated each paper according to the inclusion criteria. Any disagreements were discussed and resolved with a third reviewer (XC) until a consensus was reached. The agreement rate between the two reviewers was 0.90, indicating high consistency.

Data Extraction and Bibliographic Matrix Building

Bibliographic Item Co-Occurrence Matrix Builder (BICOMB; version 2.0) (30) was used for data extraction and matrix construction.
This software can generate a co-occurrence matrix that can be used as basic data for subsequent analyses. In this study, we mainly used BICOMB to construct a term-article matrix and the co-occurrence matrixes of highly frequent keywords and highly cited papers; data cleaning was carried out before processing. The singular and plural forms of keywords were merged, and spelling errors were checked manually. Keywords with different spellings but similar meanings were combined, and those without relevant value to the study were deleted. Thereafter, a binary matrix, with highly frequent keywords as the rows and source papers as the columns, was built through BICOMB for further analysis. In addition, the number of highly frequent keywords was defined according to the threshold value T, calculated as T = (-1 + √(1 + 8i))/2 (31), where i represents the number of keywords with a frequency of 1. Highly frequent keywords were defined as those occurring at least nine times, according to the calculated T value. Hirsch proposed the h-index to quantify the scientific research output and academic level of scientists (32). Similarly, a co-citation analysis was used to select highly cited papers to reflect the knowledge base of a certain research field (33), so the h-index can also reflect the contribution of highly cited papers to all of the references in a given field. The h-index was determined as follows: the references in the list were sorted in descending order of citation frequency, with n the rank of each paper; the h-index is then the maximum n for which the citation count of the nth paper is greater than or equal to n (32).
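Both cutoffs just described, the high-frequency keyword threshold and the h-index, reduce to a few lines of code. A minimal sketch with toy counts (not the study's data), using the threshold formula in the form reconstructed above:

```python
import math
from collections import Counter

def high_freq_threshold(keyword_counts):
    """Threshold T = (-1 + sqrt(1 + 8*i)) / 2, where i is the number of
    keywords occurring exactly once in the corpus."""
    i = sum(1 for c in keyword_counts.values() if c == 1)
    return (-1 + math.sqrt(1 + 8 * i)) / 2

def h_index(citations):
    """Largest n such that the n-th most-cited paper has >= n citations."""
    ranked = sorted(citations, reverse=True)
    return max((n for n, c in enumerate(ranked, start=1) if c >= n), default=0)

# Toy data -- not the paper's keyword or citation counts.
counts = Counter(["obesity"] * 12 + ["fiber"] * 9 + ["tea"] * 1 + ["soy"] * 1)
print(high_freq_threshold(counts))  # keywords at or above T are "highly frequent"
print(h_index([40, 38, 35, 2, 1]))  # -> 3 for this toy citation list
```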
Therefore, the frequency of keyword co-occurrence in the same paper can form a co-keywords network composed of these keyword pairs. Similarly, when two papers appear in the reference list of a third paper simultaneously, the two papers become co-cited. Moreover, the more times the two papers are co-cited, the more significant their correlation and the more likely they are to discuss a similar topic (38). In the visualization network, the keywords (cited papers) were represented by nodes. The connections between keywords (cited papers) were represented by edges, with the co-occurrence frequency shown by the links. The strength of the relationship between the keywords (cited papers) was represented by the thickness of the line: the stronger the relationship between two keywords (cited papers), the thicker the line connecting them. The location of nodes was determined by the centrality indices (i.e., degree, betweenness, and closeness). Distributing the nodes in the network clearly presents the knowledge structure on the dietary factors associated with MetS.

Theme Evolutionary Analysis
We used the Science Mapping Analysis software Tool (SciMAT; version 1.1.04) (39), an open-source science mapping software tool developed at the University of Granada, to describe the thematic and conceptual evolution in this field. According to the amount of literature, the literature was divided into three consecutive periods: 2000-2009, 2010-2014, and 2015-2020. Configuration was performed as follows: words as the unit of analysis (author and source); 2, 3, 3 as the thresholds of data frequency reduction in each period; co-occurrence as the matrix form; 2, 3, 3 as the thresholds of data network reduction in each period; association strength as the similarity measure to normalize the network; and the simple centers algorithm as the clustering algorithm. The bibliometric measures were selected using the h-index, and these measures were calculated using the core document mappers. Jaccard's Index and the Inclusion Index were selected as the measures for the longitudinal map (40). The longitudinal analysis was shown using an overlapping map and an evolution map, which helped us detect the evolution of the clusters throughout different periods and study the transient and new keywords in each period and the keywords shared by two consecutive periods (41). The strategic diagram, a two-dimensional graph composed of the x- and y-axes, is used to describe the internal connection and mutual influence of a certain research field (42). In this study, the x-axis in the diagram represented centrality, which measured the external interaction between the theme clusters and could be understood as the theme's relevance value. Meanwhile, the y-axis represented density, which measured the theme cluster's internal cohesion and could be interpreted as a measure of the theme's development. The detailed process of literature retrieval, study selection, and bibliometric analysis is shown in Figure 1.

Growth of the Literature
Overall, 1,305 papers were included in the analysis. The distribution of the publication year is shown in Figure 2. For comparison, all papers on MetS in WoS are also mentioned in the figure.

Research Hotspots of Dietary Factors Associated With MetS
A total of 57 highly frequent keywords, accounting for 37.36% of all keywords, were extracted from the included papers (Supplementary Table 1).
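The similarity measures named in the SciMAT configuration above are one-line formulas. The sketch below is illustrative only, with invented toy counts and keyword sets; it shows the association strength used to normalize the co-word network and the Jaccard and Inclusion indices used for the longitudinal maps:

```python
def association_strength(c_ij: int, c_i: int, c_j: int) -> float:
    """Similarity used to normalize the co-word network: observed
    co-occurrences divided by the product of individual occurrence counts."""
    return c_ij / (c_i * c_j)

def jaccard(a: set, b: set) -> float:
    """Jaccard's Index between two theme keyword sets (longitudinal map)."""
    return len(a & b) / len(a | b)

def inclusion(a: set, b: set) -> float:
    """Inclusion Index between two theme keyword sets (longitudinal map)."""
    return len(a & b) / min(len(a), len(b))

# Toy keyword sets for two consecutive periods.
p1 = {"fatty acids", "calcium", "dietary fiber"}
p2 = {"dairy food", "calcium", "dietary fiber", "tea"}
print(association_strength(10, 40, 25))  # 0.01
print(jaccard(p1, p2), inclusion(p1, p2))  # 0.4 0.666...
```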
Based on the co-occurrence of highly frequent keywords, a matrix with highly frequent keywords as rows and source papers as columns was established. The "1" and "0" in the matrix indicate that the highly frequent keyword was present or absent in the paper, respectively (Supplementary Table 2). The biclustering result of the matrix was shown as matrix visualization and mountain visualization. The visualization matrix illustrated the clustering results of highly frequent keywords and source papers (Figure 3A). Row clustering indicated the clustering of highly frequent keywords, which were listed on the right side of the matrix. Column clustering represented the source papers. The highly frequent keywords were divided into five clusters (Figure 3A) (0-4, five clusters in total), and the clustering effect was verified using mountain visualization (Figure 3B), in which the volume of a peak is proportional to the number of highly frequent keywords contained in the cluster, and the height is proportional to the similarity within the cluster. The greater the similarity within the cluster, the steeper the mountain. A detailed reading of the highly frequent keywords in each cluster helped summarize the research topics of clustering. Finally, the research hotspots of the dietary factors associated with MetS were identified. In addition to analyzing the semantics of keywords and the contents of the representative papers in each cluster, some clusters were further divided into narrower topics. Each of these topics was summed up as a single hotspot. Overall, eight hotspots were identified, beginning with (1) the effect of vitamin D intake on MetS (cluster 0).

Knowledge Structure
In the co-occurrence network, the node located in the center of the network is the most active, having more connections than other nodes and playing an important role in network connectivity. Node importance was evaluated according to three classic indicators of SNA, namely, degree centrality, betweenness centrality, and closeness centrality (43,44). The node size was proportional to the degree centrality of a keyword, with the line thickness representing the co-occurrence frequency. Node color was classified according to the three indicators. The highly frequent co-occurrence keyword network is shown in Figure 4. Table 1 shows the top 20 nodes in the network centrality of the dietary factors associated with MetS. The nodes with high degree centrality, betweenness centrality, and closeness centrality were overlapping and repetitive. These nodes included obesity, insulin resistance, cardiovascular risk, inflammation, and diabetes mellitus. They played an important intermediary role in the flow and control of network resources. The key dietary nodes, including fatty acids, polyphenols, dairy food, and dietary fiber, were also screened out using the network evaluation indicators. The centrality of the 57 highly frequent keywords is shown in Supplementary Table 3. The h-index in this field was calculated as 40. Based on this value, the top 40 papers with the highest citation frequency were selected as highly cited papers in the list of references (Supplementary Table 4). The co-citation network of the 40 highly cited papers well reflects the structure of the knowledge base in this research field (Supplementary Figure 1). The centrality of the 40 highly cited papers is shown in Supplementary Table 5.
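As an aside, Ucinet is an interactive package, but the three centrality indicators just discussed can be reproduced with any graph library. A minimal Python/networkx sketch (the toy keyword sets are invented for illustration) builds the weighted co-keyword network and reports degree, betweenness, and closeness centrality:

```python
import itertools
import networkx as nx

# Toy corpus: each paper is represented by its set of cleaned keywords.
papers = [
    {"obesity", "insulin resistance", "fatty acids"},
    {"obesity", "mediterranean diet", "polyphenols"},
    {"insulin resistance", "fatty acids", "dietary fiber"},
]

# Weighted co-keyword network: an edge for every keyword pair that
# appears together in a paper; weight = number of co-occurrences.
G = nx.Graph()
for kws in papers:
    for a, b in itertools.combinations(sorted(kws), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# The three classic centrality indices used to place and color the nodes.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

for node in G.nodes:
    print(f"{node}: deg={degree[node]:.2f}, "
          f"btw={betweenness[node]:.2f}, clo={closeness[node]:.2f}")
```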
To make the network more concise and intuitive, the paper-pair co-occurrence threshold was set to ≥30 times; then, two network structures were extracted (Figure 5).

Thematic Overlapping Map
The overlapping map shows the change in research themes and the stability of the research fields in the form of data flow. The circles represent the time periods, and the figure in each circle represents the number of themes in the corresponding period. The horizontal arrow between two circles indicates the continuity of the two periods. The number on the arrow represents the number of themes shared by the two periods, and the value in brackets is the stability index, which was used to measure the overlap between the two periods. The upper incoming arrow represents the number of new themes in a given time period, and the upper outgoing arrow represents the themes present in this period, but not in the next period. Figure 6 shows the evolution of dietary factors associated with MetS in the past 20 years divided into three periods (from left to right): 2000-2009, 2010-2014, and 2015-2020. The number of new themes was larger than the number of declining themes in each period, the total number of themes and the stability index continuously improved, and the number of themes studied was preserved from the previous to the next period.

Strategic Diagrams
In the strategic diagrams, four different quadrants could be distinguished based on their positions on the map. A cluster in Quadrant I (upper right) indicates themes that were well-developed as motor themes in the network; in Quadrant II (upper left), research maturity was high but had marginal importance for the field (i.e., highly developed and isolated themes); in Quadrant III (lower left), research maturity was low and in the marginal position (i.e., generally emerging or declining themes); and in Quadrant IV (lower right), research was in the central position but not mature and had great developmental potential (i.e., basic and transversal themes) (Figure 7A) (28,41). The size of a single node in the diagrams was proportional to the number of documents involved in each cluster; a sketch of this quadrant layout is given after this section. The quantitative data of nodes based on other performance measures are shown in Supplementary Table 6. The motor themes of each period differed, as shown by the strategic diagrams. The motor theme of the first period (2000-2009) was "FATTY-ACIDS" (Figure 7B). In the second period (2010-2014), researchers shifted their focus to "TEA" research (Figure 7C). In the third period (2015-2020), attention was given to "VITAMIN-C" and "FISH-OIL" (Figure 7D).

Thematic Evolution Map
In the thematic evolution map shown in Figure 8, the nodes represent the clustering of themes in a certain period, the size of nodes is proportional to the number of documents associated with each cluster, and the thickness of the edges represents the closeness of the two clustering themes. The solid line represents linked clustering themes sharing the main analysis units (keywords), which indicates that the two themes are persistent and represent the evolution direction of the mainstream. The dotted line indicates themes sharing elements that were not the main analysis units, representing the evolution direction of tributaries. The thicker the connection of two theme clusters, the higher their correlation strength and the stronger the evolutionary ability.
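Returning to the strategic diagrams described above, the quadrant layout of Figure 7 is easy to reproduce outside SciMAT. The matplotlib sketch below draws one from invented (centrality, density, document-count) triples; a real diagram would use the cluster measures exported from SciMAT:

```python
import matplotlib.pyplot as plt

# Made-up (centrality, density, n_documents) values for a few themes.
themes = {
    "FATTY-ACIDS": (0.9, 0.8, 120),
    "TEA": (0.4, 0.9, 35),
    "DIETARY-FIBER": (0.8, 0.3, 60),
    "IRON": (0.2, 0.2, 15),
}

cx = [v[0] for v in themes.values()]
dy = [v[1] for v in themes.values()]
size = [v[2] * 5 for v in themes.values()]  # node area ~ document count

fig, ax = plt.subplots()
ax.scatter(cx, dy, s=size, alpha=0.6)
for name, (x, y, _) in themes.items():
    ax.annotate(name, (x, y), ha="center", va="bottom")

# Median lines split the plane into the four quadrants described above.
ax.axvline(sorted(cx)[len(cx) // 2], linestyle="--")
ax.axhline(sorted(dy)[len(dy) // 2], linestyle="--")
ax.set_xlabel("centrality (relevance)")
ax.set_ylabel("density (development)")
ax.set_title("Strategic diagram (illustrative data)")
plt.show()
```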
The isolated nodes represent themes that appeared only in a certain period and had no relationship with the themes of the previous and later periods. These isolated nodes reflected, to some extent, the new themes. "FATTY-ACIDS" shared the main keywords with "MEDITERRANEAN-DIET," "CALCIUM," and "DIETARY-FIBER." In the 2015-2020 period, "MEDITERRANEAN-DIET" was promoted from third to second place. "MEDITERRANEAN-DIET" was strongly related to "TEA," as indicated by the thick solid lines. "CALCIUM" continued to evolve into "DAIRY-FOOD," and "DIETARY-FIBER" …

The themes of cardiovascular risk, obesity, and Mediterranean diet were persistently investigated throughout the study period. They were located in the center of the network, with high frequency. These findings indicated that these themes not only were closely related to MetS, but also played a crucial mediating role in the diet network of MetS. These results indicate an overall direction in this research field: controlling obesity through healthy diets to reduce MetS and ultimately achieve the overall goal of reducing the risk of CVD. CVD remains the leading cause of morbidity and mortality, causing a significant disease burden worldwide. Each component of MetS is an independent risk factor for CVD (45). It is well-accepted that obesity is a risk factor for multiple diseases and a key etiological factor in the development of MetS. Diet intervention is the cornerstone of weight loss treatment. Our results reflect that, increasingly, studies are focusing on the Mediterranean healthy diet, which has shown a protective effect on MetS. In the evolutionary map, the Mediterranean diet had become the second most common research theme in the most recent period. This suggests that more people are now focusing on their health by adopting a healthy diet to prevent MetS. Furthermore, dietary intervention alone is not an effective way to control obesity; physical activity is vital for maintaining weight loss achieved through diet. References provide the main source of knowledge for scientific research and exploration (46). Therefore, by analyzing the co-occurrence relationships of highly cited papers from the reference list, the knowledge base of this field can be visualized. From the center of the knowledge base map, it can be seen that the papers on diagnostic criteria and the definition of MetS were most commonly co-cited, and these papers had high co-occurrence with studies on the association of CVD with MetS, MetS prevalence, and the dietary role in MetS. These topics usually appeared in the introduction of studies on the dietary factors of MetS. Thus, the first knowledge base was also the necessary knowledge background for researchers launching new projects in this field. The second knowledge base captured from the map was the dietary pattern, especially the Mediterranean diet. In addition, we found that polyphenols were research hotspots and a key dietary node in the knowledge structure network. Tea (a polyphenol-rich food) became the motor theme, with strong evolutionary ability, in the second period, and it further evolved into the Mediterranean diet (Supplementary Figure 2). This may be attributed to findings that the beneficial effect of the Mediterranean diet against MetS was related to its high content of bioactive substances and polyphenols (47,48).
Another knowledge base reflected in the map was the relationship between dairy foods and MetS, which provides an important reference for further research on milk, yogurt, calcium, vitamin D, potassium, and so on. Furthermore, the evolutionary analysis showed that calcium became an important research topic in 2010-2014, and it was usually studied together with vitamin D (Supplementary Figure 3). However, our results showed that calcium appeared in Quadrant IV, which means that calcium was yet to be fully developed in the corresponding period. The effect of dietary calcium on MetS has not been completely clarified. Therefore, research on the mechanisms of the nutrients in dietary patterns and foods, such as polyphenols, calcium, and vitamin D, in the occurrence and development of MetS might influence the future direction of this field. The results of the biclustering analysis and SNA showed that fatty acids, dietary fiber, and polyphenols not only were the research hotspots, but also had high centrality in the co-occurring keyword network. We also identified fatty acids as a motor theme with high centrality and influence on other themes in the 2000-2009 diagrams. Significantly, the research on n-3 polyunsaturated fatty acids (n-3 PUFAs) was relatively mature. The fact that fatty acids were quantitatively relevant study hotspots may be explained by the following reasons: (1) fat is an important macronutrient for supplying energy, which is crucial for human health, and fatty acids are the main component of fat (49); (2) fatty acids exist widely in different foods and are easily available; (3) high-energy fatty foods cost less; (4) fatty acid intake is essential for the human body; and (5) different types of fatty acids have significant roles in the evolution or the prevention of MetS. The World Health Organization (WHO) recommends limiting the intake of fatty acids to <30% of energy intake (50). In recent decades, the fat intake of developing countries has increased, while that of developed countries has decreased (51). Further, most of the research is concentrated in western developed countries. Therefore, with the change in eating habits, the in-depth study of different types of fatty acids, and the rise of metabonomics research, the study of fatty acids began to evolve. The evolution process showed that the studies of fatty acids further evolved into studies on dietary fiber and calcium (Supplementary Figure 3). Dietary fiber is generally not digested and absorbed by the stomach and small intestine, but can be fermented by the intestinal flora, which plays a vital role in maintaining their homeostasis, the balance of intestinal mucus generation and degradation, and the protection of the intestinal wall structure (52). With the development of metagenomic sequencing and metabonomics analysis in recent years, the research on dietary fiber, intestinal flora, and their relationship with human health has been of primary concern for scientists. Previous studies inferred that dietary fiber may play an important role in the dietary management of MetS (53); however, the mechanism by which fiber exerts a beneficial effect on MetS components has not been well-elucidated. Similarly, based on the strategic diagram, dietary fiber was identified as a basic and transversal theme during 2010-2014, indicating that it was related to MetS, but research on this theme was yet to be fully developed.
To our knowledge, it is worthwhile for researchers to combine metabonomics to monitor the regulatory effect of dietary fiber on intestinal flora and intestinal environment, and further discover the mechanism of dietary fiber in MetS for the future. From 2015 to 2020, the motor theme gradually changed from tea to fish oil and vitamin C. The main factor mediating this transformation may be their antioxidant characteristics. Fish oil and omega-3 fatty acids are closely related (Supplementary Figure 4). Careful examination of vitamin C showed that it was closely related to vegetables, fruits, carotenoids, and antioxidants (Supplementary Figure 5). Antioxidants in foods can protect cells and tissues from oxidative damage by inducing endogenous antioxidant defense (54). The concept of nutritional antioxidants originated from the oxidized low density lipoprotein (LDL) theory of atherosclerosis (55). A higher concentration of oxidized LDL was found to be associated with increased risk of MetS (56). Oxidative stress is critical to the initiation and progression of MetS. In the future, the identification of more key oxidative/antioxidant targets and biomarkers will contribute to the treatment of MetS with antioxidants. Iron, zinc, and prebiotics appeared independently in the third period, which indicated that they may be emerging themes. However, as shown in our strategic diagrams, studies on iron and zinc might be traced back to an earlier period. In the recent years, with the development of metabolomics and 16S rRNA technology, the role of intestinal microbes in health and disease has been recognized in alternative and complementary forms of medicine (57), and researchers' interest in prebiotics has extended beyond iron and zinc. In comparison, prebiotics are more likely to be emerging themes in recent years. Prebiotics are dietary components beneficial to bacterial growth and metabolic activities. Supplementation of prebiotics can better regulate the intestinal flora and promote the development of the intestinal microecology. With respect to dietary prebiotics and probiotics, previous studies have shown that they may modify the gut microbiota and hence attenuate the symptoms of MetS (58,59). Intestinal microbiota are an ideal target for MetS management through supplementation with probiotics and prebiotics (60). Due to the remarkable progress of 16S rRNA sequencing technology, there has been a tremendous increase in studies on the composition and diversity of human intestinal microbiota in the past decade. Prebiotics are not absorbed by the intestine, and they can better facilitate the growth of intestinal probiotics and the metabolism of lipid and protein. Metabonomics is a crucial method to detect metabolites (fat, protein, etc.), especially 16S rRNA sequencing, which has substantial advantages in monitoring intestinal microflora and intestinal microbial status (probiotics and prebiotics). Detecting the beneficial or harmful changes of the internal environment by 16S rRNA is helpful to elucidate the pathogenesis of metabolic diseases. Technological innovation has dramatically promoted the research of prebiotics in the field of metabolism. However, data on the function of prebiotics on intestinal microbiota and their relationship with MetS are still inadequate to prove that they can be used in clinical practice to prevent and manage MetS. This highlights the need for further research on the potential benefit of prebiotics for the prevention and treatment of MetS. This study has some limitations. 
First, data were only collected from a single database (i.e., WoS), and thus, studies from other databases were not analyzed. Second, we only included articles and reviews; as such, other research hotspots may have been missed. Third, the biclustering analysis and SNA were performed based on highly frequent keywords. This led to some new, but less frequent, topics being ignored. Last, visualization tools such as gCLUTO, Ucinet, and SciMAT can only process data from one database at a time, which may lead to biased results. Despite these limitations, to the best of our knowledge, this is the first study to apply biclustering analysis, SNA, and evolutionary analysis to comprehensively evaluate the relationship between dietary factors and MetS. This allowed us to determine the overall research hotspots and knowledge structure from a statistical point of view, as well as the dynamic changes in research themes. In conclusion, fatty acids, dietary fiber, and polyphenols are the main focus of research on the dietary factors associated with MetS. To further improve research, scientists should pay more attention to vitamin C and fish oil. Prebiotics were recognized as an emerging theme with certain developmental potential. These findings provide a better understanding of the research hotspots on the dietary factors associated with MetS, and they can be used as a basis for future investigations on MetS. The application of bibliometrics should be expanded in further studies on MetS.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
XC: reviewing the results for study inclusion, performing most assays, and writing-original draft preparation. Q-JW and QC: reviewing and revising of the manuscript. T-NZ: writing-reviewing and editing. X-SL and Y-XC: performing the literature search and reviewing the results for study inclusion. Y-HZ: contributed to the approval of the final version of the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING
This study was supported by the National Key R&D Program of China (2017YFC0907400 to Y-HZ).
Calculation of the filtration problem by finite difference methods

The filtration problem of a suspension in a porous medium is relevant for the construction industry. In the design of hydraulic structures, the construction of waterproof walls in the ground, and the grouting of loose soil, it is necessary to calculate the transfer and deposition of solid particles by the fluid flow. A one-dimensional filtration problem of a monodisperse suspension in a porous medium with a size-exclusion capture mechanism is considered. It is assumed that as the deposit grows, the porosity and the admissible flow of particles through the porous medium change. The solution of the initial filtration model and the equivalent equations are calculated. For the numerical calculation of the problem, both standard first-order finite difference formulas and more accurate second-order schemes were used. The obtained solutions are compared with the results given by the TVD scheme.

Introduction
The problems associated with the transport of small solid particles by the fluid flow and the deposition of particles in the pores of a porous medium are relevant for many technologies and industries. Wastewater and drinking water treatment, the usage of industrial filters, the construction of waterproof partitions for hazardous waste storage facilities, and the grouting of loose soil by pumping a watery grout require the solution of filtration problems of a suspension in a porous medium [1,2]. The deep bed filtration of a monodisperse suspension in a homogeneous porous medium is considered in this paper. Depending on the properties of the suspension and the porous medium, mechanical interaction, diffusion, viscosity, electrostatic and gravitational forces can play an important role in particle capture [3][4][5]. If the sizes of particles and pores are of the same order, then in many cases the mechanical-geometric mechanism of particle capture, called size-exclusion, prevails: solid particles pass freely through large-diameter pores and get stuck in pores whose diameter is smaller than the particle size [6]. A traditional mathematical model determining the one-dimensional filtration of an incompressible monodisperse suspension in a porous medium with a size-exclusion mechanism of particle capture relates the concentrations of suspended and retained particles by a system of two first-order partial differential equations. The equation of mass balance of suspended and retained particles is an analog of the continuity equation. The kinetic equation determines the growth of the deposit concentration [7]. More complex models consider the change in the properties of the porous medium as the deposit accumulates [8]. A lot of papers are devoted to the solution of filtration problems. In a number of important cases it is possible to obtain exact and asymptotic solutions [7,9,10]. The methods of experimental and numerical modeling are developing actively [11][12][13]. The standard finite difference method for the numerical solution of filtration problems is the replacement of partial derivatives by difference relations. Replacing the differential equations by difference equations allows one to construct an explicit difference scheme with first-order approximation. The disadvantages of this scheme are the small time step required to fulfill the convergence condition and the low accuracy of the numerical solution, especially near the concentration front where the solution is discontinuous.
The use of more complex counter-current schemes and rapidly converging Lax-Wendroff schemes leads to unjustified smoothing or non-physical oscillations of the solution near the line of discontinuity [14]. These difference schemes, as well as TVD schemes (Total Variation Diminishing schemes), have first-order approximation when applied to filtration problems [15]. To increase the accuracy of numerical calculations, an explicit difference scheme of the second order is constructed in this paper. It is used to solve both the initial system of differential equations of the filtration problem and the equivalent first-order equation obtained in [10]. These numerical solutions are compared with the solutions based on the standard first-order schemes and the TVD scheme.

Mathematical model
A dimensionless model of filtration that considers the change of porous medium properties with deposit accumulation is set in the domain {0 ≤ x ≤ 1, t ≥ 0} by a system of mass transfer and deposit growth equations (1), (2). Here the unknowns C(x, t) and S(x, t) are the volumetric concentrations of suspended and retained particles, and the coefficient functions g(S) and f(S) are smooth and positive for S ≥ 0. For the uniqueness of the solution of the system (1), (2), the boundary and initial conditions (3)-(5) are set. The condition (3) corresponds to the injection of a suspension with a constant unit concentration of suspended particles; the conditions (4) and (5) mean that at the initial moment the porous medium does not contain any suspended and retained particles. It follows from the inconsistency of conditions (3) and (4) at the origin that the solution C(x, t) has a discontinuity on the concentration front, while the solution S(x, t) is continuous in the whole domain. The concentration front moves in the porous medium with constant velocity (6). In the domain Ω₀ ahead of the front the solution is zero, C = 0 and S = 0; in the domain Ω_S behind the front the solution is positive. The analytic solution of the problem (1)-(5) at the inlet of the porous medium x = 0 was obtained in [10] in an implicit form (7). The classical filtration model with unchanging properties of the porous medium, g(S) ≡ 1 and f(S) ≡ 1, and a linear filtration coefficient, with conditions (3)-(5), has an exact solution (9) in the domain Ω_S. The solution (7), (9) is used for approbation of the numerical solution in Ω_S. The problem (1)-(5) can be reduced to a single first-order equation (10) for the function S(x, t) with the condition (7). For the known S(x, t) the solution C(x, t) is determined from equation (2).

Finite difference schemes of the filtration problem
The replacement of the partial derivatives by the simplest finite differences specifies a standard finite difference scheme of first-order approximation. The relationship between the step τ in time and the step h in the coordinate x is chosen using the Courant convergence condition. To construct the original finite difference schemes of the second order, the integration of equations (1), (2) and (9) over a rectangular cell of the grid was used. It was found that when the integrals are approximated by second-order formulas for the standard model (8), the trapezoid formula gives a more accurate result than the method of rectangles. Therefore, for the construction of finite difference schemes for equations (1), (2), the trapezoid formula was used. The predictors at the node (x_i, t_j) approximate the solution with accuracy O(h²). The trapezium method is also used for the numerical solution of equation (10).
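The first-order scheme just described can be sketched in a few lines. Because the paper's own equations (1)-(5) were lost in the text extraction, the sketch below assumes the widely used classical dimensionless deep-bed filtration system ∂C/∂t + ∂C/∂x + ∂S/∂t = 0, ∂S/∂t = Λ(S)C with the linear filtration coefficient Λ(S) = 1 − S (i.e., the standard model (8) with g ≡ f ≡ 1); it is an illustration of the method, not the authors' code:

```python
import numpy as np

def solve_filtration(nx=200, t_end=1.0, cfl=0.5):
    """First-order explicit scheme for the classical deep-bed filtration
    model (assumed form):
        dC/dt + dC/dx + dS/dt = 0,   dS/dt = (1 - S) * C,
    with C(0, t) = 1 and C(x, 0) = S(x, 0) = 0."""
    h = 1.0 / nx
    tau = cfl * h                      # Courant condition: tau / h <= 1
    x = np.linspace(0.0, 1.0, nx + 1)
    C = np.zeros(nx + 1)
    S = np.zeros(nx + 1)
    C[0] = 1.0                         # unit inlet concentration, condition (3)

    t = 0.0
    while t < t_end:
        dS = tau * (1.0 - S) * C       # deposit growth (kinetic equation)
        C_new = C.copy()
        # backward (upwind) difference in x, forward difference in t
        C_new[1:] = C[1:] - tau / h * (C[1:] - C[:-1]) - dS[1:]
        C_new[0] = 1.0
        S += dS
        C = np.clip(C_new, 0.0, None)  # keep round-off from going negative
        t += tau
    return x, C, S

x, C, S = solve_filtration()
print(f"outlet values at t = 1: C = {C[-1]:.3f}, S = {S[-1]:.3f}")
```

The smearing of the concentration front visible in such a first-order run is exactly the low near-front accuracy discussed above, which motivates the second-order predictor-corrector construction.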
The solution C(x, t) is then obtained from equation (2). In Figures 1 and 2 the graphs of the suspended and retained particle concentrations at the porous medium outlet are shown. The lines correspond to the solutions of five different finite difference schemes of the filtration problem: the first-order scheme for the system (1), (2) (solid line) and for the equations (10), (12) (dash-dotted line); the second-order scheme for the system (1), (2) (dashed line) and for the equation (10); and the TVD scheme. In Figs. 1a and 2a, with low resolution, all five graphs merge into one line. Figs. 1b,c and 2b,c, with high resolution, show the mutual arrangement of the graphs.

Conclusion
The complex filtration model (1)-(5) does not have a simple analytical solution, so the problem is calculated numerically. In this paper, a comparison of the solutions at the porous medium outlet obtained by finite difference methods of various accuracy is performed. It is shown that all the solutions are close to each other, but the closest solutions are obtained by the second-order schemes.
Growth and Characterization of ZnO Aligned Nanorod Arrays for Sensor Applications

ZnO nanorods are promising materials for many applications, in particular for UV detectors. In the present paper, the properties of high crystal quality individual ZnO nanorods and nanorod arrays grown by the self-catalytic CVD method have been investigated to assess their possible applications for UV photodetectors. X-ray diffraction, Raman spectroscopy and cathodoluminescence investigations demonstrate the high quality of the nanorods. The nanorod resistivity and carrier concentration in the dark are estimated. The transient photocurrent response under UV illumination pulses is studied for nanorod arrays both as grown and annealed at 550 °C. It is shown that annealing increases the sensitivity and decreases the responsivity, which is explained by oxygen out-diffusion and the formation of a near-surface layer enriched with oxygen vacancies. Oxygen vacancy formation due to annealing is confirmed by an increase of the green emission band intensity.

Introduction
Zinc oxide, a well-known direct bandgap II-VI semiconductor, is a material with a large exciton binding energy (60 meV) and a wide bandgap (Eg ~ 3.37 eV) [1], suitable for short wavelength optoelectronic applications [2]. It is a promising material for fabricating photonic [2,3], optical [4][5][6], electronic [7,8] and photovoltaic devices [9,10]. Additionally, ZnO is transparent to visible light and can be made highly conductive by doping [11,12]. The non-centrosymmetry of the ZnO wurtzite structure and the polarity developed along the c axis make this material piezoelectric, which, in combination with its large electromechanical coupling, results in strong piezoelectric and pyroelectric properties useful in piezoelectric sensors [13,14] and nanogenerators [15,16]. The ferromagnetism of doped [17] and undoped ZnO [18] makes it a promising material for spintronics. Additional useful advantages of zinc oxide include low toxicity, chemical stability, and electrochemical activity, making it a promising material for biosensors and biomedical applications [19,20]. In the strict sense, zinc is not a transition metal, as it has a d¹⁰s² electronic configuration with a completely filled d-shell and can be classified as a post-transition metal. However, it exhibits many properties similar to those of transition metals, and it is often convenient to include this element in a discussion of the transition elements. In zinc oxide, the s electrons are strongly pulled by oxygen, and consequently the structural, physical and chemical properties are mostly determined by the d electrons, in a certain sense similar to transition metal oxides. Miniaturization in electronics and the development of different photonic devices require a new generation of cheap, energy-efficient, nano- or submicron-sized semiconductor lasers. One-dimensional (1D) ZnO nanocrystals have a perfect structure and a developed surface, which gives them certain advantages in the abovementioned practical applications [5,16]. In this regard, ordered arrays of zinc oxide nanorods (NRs) obtained by various methods are of great interest to researchers [21,22]. To obtain ZnO nanorods, various methods have been used, such as hydrothermal synthesis, the solvothermal method, the sol-gel method, chemical vapor deposition, organometallic chemical vapor deposition, magnetron sputtering, laser ablation, etc. [2,21,23].
Most of these processes are carried out at relatively low temperatures, which reduces the cost of the structures but increases the number of defects. Therefore, despite the abundance of methods, the synthesis of high-quality ZnO NRs with a perfect structure is not an easy task. The properties of ZnO NRs can vary greatly, even within the same method of synthesis. In [24] an original self-catalytic CVD procedure for growing zinc oxide nanorod arrays was developed. One of the advantages of this method is the possibility to deposit ordered arrays of high-quality single-crystal ZnO nanorods both on silicon substrates of various orientations and on inexpensive transparent glass substrates [25], which makes it attractive for practical applications. The method was developed to grow high quality nanorods for use as a laser medium, and it was interesting to study the performance of such nanorods as UV sensors. In the present work, arrays of ZnO nanorods were grown by the self-catalytic CVD procedure. The properties of the nanorods were investigated to assess their possible applications for UV photodetectors. The performance of the UV sensors depends both on the state of the zinc oxide surface and on the concentration of intrinsic defects, which strongly depends on the synthesis conditions and/or on post-synthesis treatment.

Materials and Methods
The NR arrays were synthesized in a flow-type two-zone quartz reactor in accordance with the previously developed method [24][25][26]. A charge of granulated high-purity (99.99%) zinc was placed in the first (evaporation) zone, and substrates were placed in the second (synthesis) zone. Si {100}, fused silica and glass wafers were used as substrates. Synthesis was carried out at a reduced pressure under conditions of continuous evacuation. The pressure in the reactor was kept at a level of 10³ Pa. The process was carried out at temperatures of 610 °C and 550 °C in the evaporation and synthesis zones, respectively. During the process, zinc was evaporated in the first zone, from which zinc vapor arrived at the second, cooler zone, where it was partially condensed, forming an array of zinc nanodrops on the substrates, sufficiently uniform in size. These Zn drops serve as a catalyst, in contrast to the usual CVD process in which a noble metal (gold) is used as a catalyst. Further, when a high-purity oxygen-argon mixture (15% O₂) enters the growth zone, its chemical interaction with liquid zinc occurs. The formed oxide is dissolved in a drop of zinc to form a supersaturated solution, from which solid ZnO crystallizes at the metal/substrate interface, so that zinc oxide nanocrystals are deposited onto the substrates under the zinc droplets. The gas mixture was supplied to the reactor at a rate of 6 L/h. The synthesis was carried out for 20-30 min with a zinc consumption of 12-15 g/h. The diameter of the growing NRs corresponds to the diameter of the liquid zinc drop. Using this procedure, high-quality ZnO NR arrays were grown, the diameter and length of which can be varied by changing the synthesis conditions (duration, reagent consumption, etc.). The above procedure allows us to synthesize arrays of well-faceted, vertically aligned single-crystal NRs with a 150-250 nm diameter and up to 10 µm length.
To obtain individual nanorods, they were separated from the substrate by ultrasonic treatment and transferred onto a silicon substrate. The cathodoluminescence (CL) investigations of arrays and individual nanorods of different size and geometry were carried out in a JSM 6490 (JEOL) SEM equipped with the MonoCL3 system (Gatan) and with a Hamamatsu photomultiplier as a detector. The investigations were carried out in the temperature range from 90 to 300 K. In most cases a beam energy in the range from 10 to 20 kV and a beam current of 0.1-1 nA were used for the CL measurements. The crystallinity of the NRs was examined by X-ray diffractometry (Θ-2Θ) in the scheme of a two-crystal diffractometer on a laboratory BRUKER D8 Discover X-ray source with a rotating copper anode (CuKα radiation, λ = 1.54 Å). The Raman spectra of the samples were studied using a Bruker Senterra Raman microscope under excitation by a 532 nm solid-state laser. To study the photoresponse of nanorod arrays, vertically oriented nanorods were grown on quartz substrates coated with a thin polycrystalline ZnO film. The diameter and density of the nanorods as estimated by SEM were 150 nm and 4 × 10⁸ cm⁻², respectively. For current measurements a quartz plate with two indium contacts, deposited in the form of strips with a length of 2 mm and a distance of 3 mm between them, was pressed to the nanorod arrays. The transient photocurrent response of the samples was measured at a bias voltage of 30 V using a UV lamp with an emission maximum at about 370 nm and a power of 4 W.

Results and Discussion
The typical SEM images of the studied NRs are presented in Figure 1. It is seen that most of the NRs are aligned perpendicular to the substrate. The diffractograms of the ordered arrays of ZnO NRs show only the (002) and (004) reflections (Figure 2), which confirms the growth direction along the c-axis. Similar patterns were observed on arrays grown on both single-crystal silicon and quartz substrates. The full width at half maximum of the (002) peak for these arrays is 10-12′, which indicates that the nanorods have a perfect crystal structure and are well aligned.
The high crystal quality of the NRs was also confirmed by the Raman and CL spectra. The Raman spectra demonstrate the high crystal quality of the sample wurtzite structure (Figure 3). The low-frequency E2 (low) mode is mainly associated with nonpolar oscillations of the heavier Zn sublattice, whereas the high-frequency E2 (high) mode is mainly associated with the displacement of the lighter oxygen atoms. The modes A1 and E1 are split into longitudinal (LO) and transverse (TO) optical components. With the exception of the LO modes, all Raman active phonon modes are clearly identified in the measured spectrum. The full width at half maximum and the positions of the E2 (low) and E2 (high) peaks are comparable to the values for bulk ZnO crystals [27]. The CL studies of nanorod arrays showed that the spectrum consisted of two emission bands: a near-bandedge (NBE) UV band and a green band (2.4-2.5 eV), with the intensity of the first, as a rule, significantly exceeding the intensity of the second one. Despite numerous studies, the origin of the green emission band remains debated [28], although in structures that are not intentionally doped it is usually associated with intrinsic point defects, in particular with oxygen vacancies. The low green band intensity indicates a high crystalline perfection of the grown nanorods. Moreover, this means that the nanorods are stoichiometric or, as might be expected, zinc rich. It was observed that the form and size of the nanorods can vary depending on the growth conditions.
A variation of nanorod size and form led to a change of the NBE band form and of the green band intensity. As an example, the UV CL spectra measured in the temperature range from 90 to 300 K on an individual nanorod and on nanorods grown together in the form of a comb are shown in Figures 4 and 5. It is seen that at low temperature the excitonic lines are well resolved; however, their relative intensities depend on the nanorod form. This can be explained under the assumption that some excitonic modes fall into resonance inside the nanorods. At room temperature the excitonic modes are not resolved; nevertheless, it should be noted that the emission maximum for the individual nanorod is shifted to higher energy by about 30 meV relative to the spectrum of the comb-like structure. A similar dependence on nanorod size and form was observed in many papers; however, its cause has not yet been conclusively established [28][29][30]. To study the electrical and photoelectrical properties of single nanorods, they were shaken off in an ultrasonic bath onto a substrate with several pre-prepared In contacts, and then places in which the contacts were connected by only one nanorod were found in a SEM (Figure 6). The current-voltage (I-V) curves measured on such a nanorod in the dark and under UV illumination are shown in Figure 7. It is seen that both dependences are practically linear; thus the contacts are practically ohmic, or their resistance is lower than that of the NR, which correlates with [31]. The dark resistance is about 6.5 × 10⁸ Ohm, which allows us to estimate the resistivity as 1.3 × 10⁴ Ohm·cm. If the electron mobility is assumed to be 500 cm²/(V·s), the dopant concentration can be estimated as about 10¹² cm⁻³. This value should be considered the lower limit for the concentration, because the contact resistance and a possible decrease of the effective NR cross-section due to near-surface band bending could increase the apparent resistance of the NRs. Under UV illumination the resistance decreases by about 65 times, i.e., the detector sensitivity, equal to Iph/Idark, where Iph and Idark are the photo- and dark current values, respectively, is 65. The estimation of the responsivity shows that at a bias of 0.2 V it exceeds 500 A/W. Such high sensitivity and responsivity values can be expected for NRs because it is widely accepted that the photocurrent gain in short-length photodetectors is proportional to τ/tr, where τ and tr are the excess carrier lifetime and the carrier transit time, respectively [32]; therefore, it can reach high values in micron-size structures. However, in spite of the very high sensitivity and responsivity values, photodetectors based on single NRs are unlikely to find wide practical application in the near future. Due to the complexity of fabricating structures based on single NRs, UV sensors based on arrays of ZnO NRs seem more promising for these purposes.
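The estimates above follow from elementary relations: n = 1/(e·μ·ρ) for the concentration, and Iph/Idark for the sensitivity. The small worked example below simply re-derives the quoted numbers from the values given in the text (the mobility of 500 cm²/(V·s) is, as stated there, an assumption):

```python
Q_E = 1.602e-19          # elementary charge, C

rho = 1.3e4              # resistivity from the dark I-V curve, Ohm*cm
mu = 500.0               # assumed electron mobility, cm^2/(V*s)

# Lower-limit carrier (dopant) concentration: n = 1 / (q * mu * rho)
n = 1.0 / (Q_E * mu * rho)
print(f"n ~ {n:.1e} cm^-3")          # ~1e12 cm^-3, as quoted

# Sensitivity: photo-to-dark current ratio at a fixed bias, for the
# observed 65-fold resistance drop under UV illumination.
R_dark, R_uv = 6.5e8, 6.5e8 / 65.0   # Ohm
V = 0.2                              # bias, V
I_dark, I_ph = V / R_dark, V / R_uv
print(f"sensitivity I_ph/I_dark ~ {I_ph / I_dark:.0f}")   # ~65
```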
The transient photocurrent response of the NR array under UV illumination pulses is shown in Figure 8, in which the sensitivity is presented. It is seen that the sensitivity of the as-grown arrays is about 1, i.e., under UV illumination the current increases about two times. However, it should be noted that the structure of the detector studied is not optimal, because many illuminated NRs have no contact with the metal, and the NRs under the metal contacts are illuminated only by reflected light. Therefore, the sensitivity could be essentially increased if transparent contacts were used. The rise and decay rates of the photocurrent are rather low, which correlates well with other studies of nanorod-based photodetectors [33][34][35][36]. Such a slow response cannot be explained by excess carrier recombination, because the lifetime in ZnO nanorods was measured to be in the ns range [28]. Usually such a slow response is explained under the assumption that oxygen molecules are adsorbed on the nanorod surface in the dark as negatively charged ions by capturing electrons from the n-type ZnO, thereby creating a depletion layer near the nanorod surface [30,[33][34][35][36]. Such a depletion layer decreases the effective nanorod cross-section, increasing the resistance. Under UV irradiation, electron-hole pairs are generated, and the holes migrate to the surface and compensate the surface charge, reducing the depletion layer thickness. Moreover, the holes interact with the oxygen ions to form neutral O₂ molecules, which are desorbed from the surface.
This process also leads to an electron concentration increase and a decrease of the depletion layer thickness [37,38]. Thus, the photocurrent increases due to both an increase of the carrier concentration and an increase of the effective cross-section. As shown in [34], post annealing can improve the performance of ZnO NR photodetectors due to a decrease of the defect concentration.
Annealing of the NR arrays at 550 °C was observed to decrease both Iph and Idark; however, the Idark decrease was more pronounced; thus, the sensitivity increases with the annealing duration and exceeds 300 after 3 h of annealing (Figure 8). The responsivity decreases with annealing from about 2.3 A/W to 3 × 10⁻² and 1.25 × 10⁻² A/W after 1 and 3 h of annealing, respectively. CL investigations showed that such annealing led to an essential increase of the green band intensity, which is frequently associated with oxygen vacancies in near-surface layers [28,39] (Figure 9). Thus, it can be assumed that oxygen out-diffusion takes place during such annealing and the near-surface layers of the NRs become enriched with oxygen vacancies. These near-surface vacancies capture electrons, forming a high-resistivity layer and a wide depletion region near the surface. This is the reason for the essential decrease of the apparent cross-section and, in turn, for the decrease of Idark. Accordingly, illumination can lead to the ionization of the oxygen vacancies, suppressing the depleted region and increasing the photocurrent gain [40,41]. Thus, such annealing can be effectively used to control the photoelectrical properties of the NRs. It should be noted that, although the measurement scheme used in this work has already been applied (see, e.g., [39]), it is far from optimal. The contacts are attached only to a part of the NRs, which are screened from direct light and can be excited only by scattered light. However, photogenerated e-h pairs inside the NRs can be separated, and they can act as a gate for the seed polycrystalline ZnO layer, improving the sensitivity. Thus, it seems that the measured values of the sensitivity and responsivity can be essentially improved.

Conclusions
Thus, the photoelectric properties of high quality individual ZnO nanorods and nanorod arrays grown by the self-catalytic CVD method have been studied. The high quality of the nanorods is confirmed by the X-ray diffraction, Raman spectroscopy and cathodoluminescence investigations. The nanorod resistivity and carrier concentration are estimated. The transient photocurrent response under UV illumination pulses is studied for nanorod arrays both as grown and annealed at 550 °C. It is shown that annealing increases the sensitivity and decreases the responsivity, which is explained by oxygen out-diffusion and the formation of a near-surface layer enriched with oxygen vacancies.
Oxygen vacancy formation due to annealing is confirmed by an increase of green emission band intensity.
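The surface-depletion picture invoked above, in which a wider depleted shell shrinks the conducting core of each nanorod and suppresses I_dark, can be illustrated with a toy geometric estimate; the radius and depletion widths below are hypothetical and serve only to show how strongly the conducting cross-section depends on the depletion width:

# Fraction of a cylindrical nanorod cross-section left conducting when a
# surface depletion shell of width w is subtracted (pi*r^2 cancels in the
# ratio). Radius and widths are illustrative, not measured values.

def conducting_fraction(radius_nm: float, depletion_nm: float) -> float:
    core = max(radius_nm - depletion_nm, 0.0)
    return (core / radius_nm) ** 2

r = 100.0                       # nm, hypothetical nanorod radius
for w in (10.0, 30.0, 60.0):    # nm, hypothetical depletion widths
    print(f"w = {w:4.0f} nm -> conducting fraction = {conducting_fraction(r, w):.2f}")
# Widening the shell from 10 to 60 nm cuts the conducting cross-section
# from ~81% to ~16%, i.e., roughly a five-fold rise in resistance.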
2021-09-01T15:12:42.981Z
2021-06-22T00:00:00.000
{ "year": 2021, "sha1": "e8a4e00e1c043ef4ff0cbf1ecef086c5dc3991db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/14/13/3750/pdf?version=1624421335", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "a823a01499032ccc2a41c722ec4407724689266b", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
2780899
pes2o/s2orc
v3-fos-license
Association between skin diseases and severe bacterial infections in children: case-control study Background Sepsis or bacteraemia, however rare, is a significant cause of high mortality and serious complications in children. In previous studies skin disease or skin infections were reported as a risk factor. We hypothesize that children with sepsis or bacteraemia more often presented with skin diseases to the general practitioner (GP) than other children. If our hypothesis is true the GP could reduce the risk of sepsis or bacteraemia by managing skin diseases appropriately. Methods We performed a case-control study using data of children aged 0–17 years of the second Dutch national survey of general practice (2001) and the National Medical Registration of all hospital admissions in the Netherlands. Cases were defined as children who were hospitalized for sepsis or bacteraemia. We selected two control groups by matching each case with six controls. The first control group was randomly selected from the GP patient lists irrespective of hospital admission and GP consultation. The second control group was randomly sampled from those children who were hospitalized for other reasons than sepsis or bacteraemia. We calculated odds ratios and 95% confidence intervals (CI). A two-sided p-value less than 0.05 was considered significant in all tests. Results We found odds ratios for skin-related GP consultations of 3.4 (95% CI: [1.1–10.8], p = 0.03) in cases versus GP controls and 1.4 (95% CI: [0.5–3.9], p = 0.44) in cases versus hospital controls. Children younger than three months had an odds ratio (cases/GP controls) of 9.2 (95% CI: [0.81–106.1], p = 0.07) and 4.0 (95% CI: [0.67–23.9], p = 0.12) among cases versus hospital controls. Although cases consulted the GP more often with skin diseases than their controls, the probability of a GP consultation for skin disease was only 5% among cases. Conclusion There is evidence that children who were admitted due to sepsis or bacteraemia consulted the GP more often for skin diseases than other children, but the differences are not clinically relevant, indicating that there is little opportunity for GPs to reduce the risk of sepsis and/or bacteraemia considerably by managing skin diseases appropriately. Background Sepsis or bacteraemia requiring hospital admission is rare; however, it is a significant cause of high mortality and serious complications such as septic shock and multi-organ dysfunction syndrome [1-3]. Currently, little data is available about the causal factors of sepsis or bacteraemia in children in the population. The available studies in this field deal particularly with adults or with children belonging to high-risk groups such as neonates, those who are immunocompromised due to HIV infection, and children with underlying malignancies [4-7]. The few studies which have been performed on sepsis or bacteraemia in children from the general population are case series [8-10] or deal with specific causative bacterial agents [1,11-13]. Three previous studies, only one of which was performed in children, reported that among patients with sepsis or bacteraemia and an identifiable primary focus, an infection of the skin was detected most often (22-37%) [1,2,12]. Children suffering from atopic dermatitis are chronic carriers of Staphylococcus aureus and therefore run a higher risk of developing sepsis or bacteraemia [9,14].
Skin infections are almost always curable, but some may lead to serious complications such as nephritis, carditis, arthritis and sepsis if the diagnosis is delayed and/or treatment is inadequate [15]. A Dutch study performed in children aged 0-14 years reported that 28% of those with skin diseases consulted the general practitioner (GP) [16]. For this reason, we hypothesize that children who were admitted to hospital due to sepsis or bacteraemia suffered more often from skin diseases, especially skin infections, and therefore visited their GP for this reason more often prior to their admission compared to their controls. If our hypothesis is true and given the fact that skin diseases account for 23% of the total morbidity in children in general practice [17], the GP may be able to reduce the risk of sepsis or bacteraemia by recognizing skin diseases in time and treating them adequately. To test this hypothesis we performed a case-control study, aiming to answer the following research question: did children who were admitted to a hospital for sepsis or bacteraemia visit their GP more often for skin diseases before their admission, compared to matched controls? Methods We used data of the second Dutch National Survey of general practice performed by NIVEL (Netherlands Institute for Health Services Research) in 2001 and data of the LMR (National Medical Registration in the Netherlands). Second Dutch National Survey In the Netherlands, general practices have a fixed list size, all inhabitants are listed with a general practice, and GPs have a gate-keeping role. Usually, the first contact with health care, in a broad sense, is the contact with the general practitioner. This survey included a representative sample of the Dutch population. Data about all physician-patient contacts, prescriptions and referrals during 12 months in 2001 were extracted from electronic medical records of all listed patients of 104 practices (195 GPs) [18]. All diagnoses were coded using the International Classification of Primary Care (ICPC) [19]. Different health problems within one consultation were recorded separately. Socio-demographic characteristics such as age, gender, region and urbanization level of all patients listed with the participating GPs were derived from the GP's computerized patient file. The degree of urbanization was derived from the general practice's postal code and categorized into four classes: 'under 30,000 inhabitants', '30,000-50,000 inhabitants', 'over 50,000 inhabitants' and 'the three large Dutch cities Amsterdam, Rotterdam and The Hague'. The Netherlands was divided into a Northern, Central and Southern region. Children's socioeconomic status (SES) and ethnic origin were obtained by a questionnaire filled out by parents or by the children themselves if they were older than 12 years (response rate 76%). SES was based on the father's occupation, which was categorized into five classes: "non-manual work high (class I)", "non-manual work middle (class II)", "non-manual work low and farmers (class III)", "manual work high/middle (class IV)" and "manual work low (class V)". Ethnicity was derived from the country of birth of either parent. If either parent was born in Turkey, Africa, Asia (except Japan and Indonesia) or Central or South America, their children were considered to be children of non-Western origin (in accordance with the classification of Statistics Netherlands). All other children were defined as Western.
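The Western/non-Western rule just stated is a simple deterministic classification; a minimal sketch (the country sets are abbreviated, illustrative stand-ins for the whole regions named in the text, not the full Statistics Netherlands definition):

# Classify a child's origin from the parents' countries of birth, per the
# rule described above. Country lists are abbreviated for illustration.
NON_WESTERN_BIRTH = {"Turkey", "Morocco", "Suriname", "China"}
EXCEPTIONS = {"Japan", "Indonesia"}   # birth here still counts as Western

def origin(parent_birth_countries):
    """Return 'non-Western' if either parent was born in a non-Western
    country (exceptions excluded), else 'Western'."""
    for country in parent_birth_countries:
        if country in NON_WESTERN_BIRTH and country not in EXCEPTIONS:
            return "non-Western"
    return "Western"

print(origin(["Netherlands", "Turkey"]))   # non-Western
print(origin(["Netherlands", "Japan"]))    # Western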
Eight practices were excluded from analysis because of insufficient quality of data registration. LMR (National Medical Registration in the Netherlands) This continuous registration contains information about hospital admissions and diagnostic and therapeutic interventions of all hospitals in the Netherlands. All diagnoses were coded using the International Classification of Diseases, 9th revision (ICD-9) [20]. Previous research revealed that about 87% of the patients referred by the GP to a specialist can be linked to a record of the hospital register [21]. Cases and controls Cases were defined as being diagnosed with sepsis or bacteraemia at discharge. The corresponding ICD-9 codes for sepsis and bacteraemia are listed in a separate table [see Additional file 1]. Cases were only selected when their admission date was at least 14 days after the start and before the end of the one-year registration period of the survey in general practice. If cases had more than one admission within a week concerning the same health problem, only the first admission was selected. We excluded all children who were primarily admitted to a hospital for skin diseases (N = 29), but assessed GP consultations of these children 14 days prior to their hospital admission. We selected two control groups by matching each case with six controls. Cases and controls were matched on age group (Table 1), gender and region. The first control group was randomly selected from the GP patient lists irrespective of hospital admission and GP consultation, the so-called GP controls. The second control group was composed by drawing a random sample from those children who were admitted to a hospital for other reasons than sepsis or bacteraemia, the so-called hospital controls. This second control group was added because we cannot rule out that some of our severely ill cases bypassed the general practitioner prior to their hospital admission, which might lead to an under-estimation of contacts with the GP in this group. Ethical approval The study was carried out according to Dutch legislation on privacy. The privacy regulation of the study was approved by the Dutch Data Protection Authority. According to Dutch legislation, obtaining informed consent is not obligatory for observational studies. Data-analysis We analyzed data of all children aged 0-17 years and assessed whether a higher proportion of cases visited the GP with any disease, especially skin disease as listed in the S-chapter of the ICPC [see Additional file 2], within 14 days prior to their admission than controls (GP controls and hospital controls). We calculated odds ratios for the presence of GP consultations for all diseases, skin diseases and other diseases than skin diseases (cases/controls) and 95% confidence intervals (CI) using a conditional logistic regression model. We performed the same analysis for skin diseases within 30 days prior to the hospital admission of the cases. We repeated the latter analysis in a group restricted to the most severe cases (N = 44) and their controls; these cases were explicitly defined as being admitted to hospital due to sepsis, meningitis, acute osteomyelitis, acute pyelonephritis, acute mastoiditis, infectious arthritis or pneumonia. A two-sided p-value less than 0.05 was considered significant in all tests. Study population The total general practice population included 88,307 children aged 0-17 years. We found 101 cases that could be matched with 597 GP controls and 583 hospital controls. Table 1 shows the baseline characteristics of cases and both control groups.
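As a reference for the odds ratios reported below, here is a minimal sketch of the crude (unmatched) odds-ratio arithmetic with a Woolf-type confidence interval; the study itself used conditional logistic regression to respect the 1:6 matching, so this sketch only approximates the published estimates. The counts come from the Results below (5 of 101 cases and 9 of 597 GP controls with a skin-related consultation):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf (log-scale) CI.
    a/b: exposed/unexposed cases; c/d: exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of ln(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Skin-related GP consultation, cases vs GP controls:
or_, lo, hi = odds_ratio_ci(a=5, b=101 - 5, c=9, d=597 - 9)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # ~3.4 (1.1-10.4)

The crude estimate lands close to the reported 3.4 (95% CI 1.1-10.8); the small difference reflects the conditional model used in the paper.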
Cases were comparable to their controls regarding socio-demographic characteristics. GP consultations Sixty-eight cases (67%) consulted the GP 161 times within 14 days prior to their hospital admission; five cases (5%) consulted the GP for a skin disease. Among the GP controls 67 consultations were made by 53 (9%) children within 14 days prior to the admission of the case they were linked to; nine controls (1.5%) consulted the GP for a skin disease. In the same period 255 (43.7%) children among the hospital controls consulted their GP 477 times; of these children 20 (3.4%) presented a skin disease. Table 2 shows which skin diseases were presented to the GP by cases and controls. Children who were primarily admitted to hospital for a skin disease (N = 29) and excluded from analysis had the following diagnoses at discharge: skin abscesses, cellulitis, erysipelas, impetigo, infected finger/toe, paronychia and local skin infections. Of these children 14 (48%) consulted the GP 28 times within 14 days prior to their hospital admission. Eight children (28%) consulted the GP for a skin disease. (Footnotes to Table 2: 1 = International Classification of Primary Care; 2 = control group randomly selected from the general practitioners' (GP) patient lists irrespective of hospital admission and GP consultation; 3 = control group randomly sampled from those children who were hospitalized for other reasons than sepsis or bacteraemia.) Discussion We tested the null hypothesis that there is no difference between children admitted for sepsis or bacteraemia and other children as to consulting a GP for skin diseases in a period of 14 days before admission to hospital. We found an association between skin diseases presented to the GP and subsequent hospitalization for sepsis or bacteraemia when cases were compared with GP controls, but not when compared with hospital controls. We performed the same analysis in cases and controls younger than three months and found an even stronger relationship, though not significant. This lack of significance is probably due to the small number of cases in this age group. From a clinical point of view the difference between cases and controls may not be very relevant. The probability that a case consulted the GP for skin diseases prior to their hospital admission is only about 5% and therefore not a point of departure for GPs to reduce the risk of sepsis and/or bacteraemia considerably by diagnosing and treating skin diseases appropriately. However, considering cases younger than 3 months (N = 9), about 22% consulted the GP for skin diseases prior to their hospital admission, which means that GPs may have possibilities in this age group to reduce the risk of sepsis and/or bacteraemia considerably by diagnosing and treating skin diseases appropriately. We recommend replication of our study in a larger dataset for this age group. Compared with both control groups our cases consulted the GP about twice as often for both infectious and atopic skin diseases, which could support the association between sepsis or bacteraemia and infectious and atopic skin diseases [1,2,9,12,14]. In all age groups we found odds ratios concerning GP consultations for other diseases than skin diseases that are considerably higher than, and significantly different (p < 0.0001) from, the odds ratios for skin diseases.
This finding indicates that there is a very strong association between GP consultations for other diseases than skin diseases, 14 days prior to hospital admission, and being hospitalized for sepsis or bacteraemia. These two large and representative datasets enabled us to accurately assess odds ratios among cases and their matched controls and to test our hypothesis. By matching our cases and controls on age, gender and region we adjusted for differences concerning these variables and also for other socio-demographic characteristics (Table 1). To limit the seasonal variation of the GP consultations we selected only the consultations that took place within 14 days prior to the admission date of the case to which the controls were linked. Overall, the odds ratio for a GP consultation concerning skin diseases among cases versus GP controls 14 days prior to the admission of the cases is higher than the odds ratio among cases versus hospital controls. Our findings are in accordance with an earlier finding by Infante-Rivard [22] that, for severe childhood diseases, using hospital controls in comparison with population controls resulted in odds ratios closer to the null value. Conclusion There is evidence that children who were admitted due to sepsis or bacteraemia consulted the GP more often for skin diseases prior to their admission than other children, but the differences are not clinically relevant, which means that there is little opportunity for GPs to reduce the risk of sepsis and/or bacteraemia considerably by diagnosing and treating skin diseases appropriately.
2018-05-30T04:57:45.941Z
2006-08-31T00:00:00.000
{ "year": 2006, "sha1": "a9d8542cbfa2c91d8b1da5e2a0befc7f4d0ec982", "oa_license": "CCBY", "oa_url": "https://bmcprimcare.biomedcentral.com/track/pdf/10.1186/1471-2296-7-52", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "40874b1de5d7692f6b5787b97c60bca233e057ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
231885290
pes2o/s2orc
v3-fos-license
LSD’s effects are differentially modulated in arrestin knockout mice Recent evidence suggests that psychedelic drugs can exert beneficial effects on anxiety, depression, and ethanol and nicotine abuse in humans. However, the hallucinogenic side-effects of psychedelics often preclude their clinical use. Lysergic acid diethylamide (LSD) is a prototypical hallucinogen and its psychedelic actions are exerted through the 5-HT2A serotonin receptor (5-HT2AR). 5-HT2AR activation stimulates Gq- and β-arrestin-(βArr) mediated signaling. To separate effects of these signaling modes, we have used βArr1-KO and βArr2-KO mice. We find that LSD stimulates motor activities to similar extents in WT and βArr1-KO mice, with non-significant effects in βArr2-KOs. LSD robustly stimulates many surrogates of psychedelic drug actions including head twitches, grooming, retrograde walking, and nose poking in WT and βArr1-KO animals. In contrast, LSD only slightly stimulates head twitches in βArr2-KO mice, without effects on retrograde walking or nose poking. The 5-HT2AR antagonist MDL100907 (MDL) blocks these LSD effects. LSD also disrupts prepulse inhibition (PPI) in WT and βArr1-KOs; PPI is unaffected in βArr2-KOs. MDL restores PPI in WT mice, but this antagonist is without effect and haloperidol is required in βArr1-KOs. LSD produces a biphasic body-temperature response in WT mice, a monophasic response in βArr1-KOs, and is without effect in βArr2 mutants. Both MDL and the 5-HT1AR antagonist, WAY 100635 (WAY), block the effects of LSD on body temperatures in WT mice, whereas WAY is effective in βArr1-KOs. Collectively, these results reveal that LSD produces diverse behavioral effects through βArr1 and βArr2, and that LSD’s psychedelic drug-like actions appear to require βArr2. INTRODUCTION Lysergic acid diethylamide (LSD) is a prototypical psychedelic drug and is recognized as one of the most potent drugs in this class [1]. LSD was synthesized in 1938 by Albert Hofmann, who later discovered its hallucinogenic properties [1,2]. LSD alters sensation, perception, thought, mood, sense of time and space, and consciousness of self in humans [1,3]. Since LSD-induced states appear to bear many similarities to early acute phases of psychosis and because serotonin (5-HT) and LSD both contain an indolamine moiety, Woolley and Shaw [4] proposed that aberrant 5-HT levels in brain could produce mental disturbances that included psychosis. This suggestion gave rise to the 5-HT hypothesis of schizophrenia and stimulated researchers to examine LSD responses in the hopes they could gain a better understanding of the basis for schizophrenia. However, this research was largely curtailed when LSD was classified as a DEA Schedule I drug in the 1960s. Recent research has revealed that LSD has medicinal value in treating cluster headaches [5], and anxiety and depressive disorders in life-threatening conditions when combined with psychotherapy [6], and that it may have potential for studying human consciousness and substance abuse [7,8]. Besides 5-HT receptors, LSD activates a number of other biogenic amine GPCRs [9] and this polypharmacy may contribute to LSD's wide-ranging activities. One action of particular interest has been LSD's hallucinogenic activity. This activity is ascribed to stimulation of the 5-HT2A receptor (5-HT2AR), since in drug discrimination studies discrimination potency is highly correlated with hallucinogenic potency in humans [12].
The same psychedelics produce head twitches in mice and, as a result, this response has been proposed as a proxy for hallucinations in humans [13], even though non-psychedelic drugs like 5-hydroxytryptophan induce robust head-twitch responses [14]. Hallucinogen-induced head twitches in rodents are blocked by the highly selective 5-HT2AR antagonist MDL100907 [15-17] and are absent in htr2A knockout (KO) mice [18,19]. In addition, recent human studies have shown the hallucinogenic actions of LSD are blocked with the 5-HT2AR-preferring antagonist ketanserin [20]. Thus, the hallucinogenic effects of LSD appear to be mediated through the 5-HT2AR [21]. The 5-HT2AR is a member of the rhodopsin family of GPCRs that is coupled to Gq protein- and β-arrestin (βArr)-mediated signaling [22-25]. Recent experiments have found the 5-HT2AR preferentially activates Gq family members, with moderate activity at Gz, and minimal detectable activity at other Gi, G12/13, and Gs-family members [26]. However, the 5-HT2AR binds to both βArr1 and βArr2 proteins in vitro and is complexed with βArr1 and βArr2 in cortical neurons in vivo [25]. While most GPCR agonists, like 5-HT, activate both G protein and βArr modes of signaling, it has become established that ligand binding can activate the G protein-dependent signaling pathway while serving to activate or inhibit a G protein-independent pathway through βArr. Hence, a given ligand can act as an agonist at one pathway while inhibiting the other pathway, or it can serve various combinations of these actions. This property has been termed functional selectivity or biased signaling [27-29] and ligands have been developed to exploit these signaling features [cf. 30]. Although LSD activates G protein signaling at many GPCRs [11], this psychedelic stimulates βArr-mediated responses at most tested biogenic amine GPCRs [9]. Subjects Adult male and female WT and βArr1-KO, and WT and βArr2-KO mice were used in these experiments [1,2]. All studies were conducted with an approved protocol from the Duke University Institutional Animal Care and Use Committee. Open field activity Motor activities were assessed over 120 min in an open field (Omnitech Electronics, Columbus, OH) illuminated at 180 lux [3]. Head twitch, grooming, and retrograde walking These behaviors were filmed during the assessment of motor activity in the open field. Prepulse inhibition (PPI) PPI of the acoustic startle response was conducted using SR-LAB chambers (San Diego Instruments, San Diego, CA) as reported [3]. Regulation of body temperature Baseline body temperatures were taken (Physitemp Instruments LLC, Clifton, NY) in the absence of the vehicle or antagonist. Immediately afterwards mice were injected with vehicle or different doses of MDL or WAY. Fifteen min later (time 0) their body temperatures were taken and they were administered vehicle or LSD. Subsequently, body temperatures were taken at 15, 30, 60, 120, 180, and 240 min. Statistics All statistical analyses were performed with IBM SPSS Statistics 27 programs (IBM, Chicago, IL). Effects of Arrb1 or Arrb2 deletion on LSD-stimulated motor responses LSD stimulates, inhibits, or produces biphasic effects on motor activities in rats [18,37-43]. The βArr1 and βArr2 mice were used to determine whether disruption of either of these gene products could modify responses to LSD and to test whether 5-HT2AR antagonism could differentially block the effects of LSD.
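The PPI procedure above follows the cited protocol [3]; as a reference for the PPI results reported later, here is a minimal sketch of the conventional percent-PPI computation (an editorial illustration with invented startle amplitudes, not data from this study):

# Conventional percent prepulse inhibition: how much a weak prepulse
# attenuates the startle response to the subsequent 120 dB pulse.
#   %PPI = 100 * (1 - mean(startle with prepulse) / mean(startle alone))
# The amplitudes below are illustrative placeholders.

def percent_ppi(startle_with_prepulse, startle_pulse_alone):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (1.0 - mean(startle_with_prepulse)
                    / mean(startle_pulse_alone))

pulse_alone = [850, 910, 780, 880]     # arbitrary startle amplitudes
with_prepulse = [340, 420, 380, 360]
print(f"{percent_ppi(with_prepulse, pulse_alone):.1f}% PPI")   # ~56% here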
Locomotor, rearing, and stereotypical activities were monitored at 5-min intervals over the 120 min test in the βArr1 and βArr2 mice (Supplementary Figs. S1-S2; Supplementary Tables S1-S2). When cumulative baseline locomotion was examined in βArr1 mice, activity was not differentiated by genotype or the preassigned treatment condition (Supplementary Table S3; Supplementary Table S4). By contrast, overall cumulative baseline rearing activities were lower in groups that were to receive 0.1 or 0.5 mg/kg MDL with LSD compared to mice that were to be given LSD or the vehicle (p-values≤0.016). For stereotypy in βArr1 animals, overall cumulative baseline activities were lower in groups that were to receive 0.1 or 0.5 mg/kg MDL with LSD than the vehicle control (p-values≤0.005); stereotypy in the group to be injected with 0.1 mg/kg MDL with LSD was lower also than the group to be given LSD (p=0.002). In comparison to the βArr1 mice, baseline cumulative locomotor, rearing, and stereotypical activities in the βArr2 animals were not distinguished by genotype or treatment (Supplementary Table S5; Supplementary Table S6). When cumulative LSD-stimulated motor activities were examined in the βArr1 mice, no genotype effects were found. This psychedelic stimulated locomotion relative to the groups given 0.5 mg/kg MDL alone or the vehicle (p-values≤0.001) (Fig. 1a; Supplementary Table S4). Importantly, both 0.1 and 0.5 mg/kg MDL blocked the locomotor-stimulating effects of LSD (p-values≤0.001). When cumulative rearing activities were examined, these responses were found to be lower in groups given 0.5 mg/kg MDL alone or administered 0.1 or 0.5 mg/kg MDL with LSD (p-values≤0.028) (Fig. 1c). An assessment of cumulative stereotypical activities in the βArr1 animals revealed responses in the MDL group, as well as in the 0.1 and 0.5 mg/kg MDL plus LSD groups, to be significantly decreased relative to the LSD-treated mice (p-values≤0.027) (Fig. 1e). Hence, MDL normalized the LSD-induced hyperactivity in the βArr1 animals. Effects of LSD in the βArr2-KO mice were quite different from those of its WT controls (Supplementary Table S6). LSD was more potent in stimulating cumulative locomotor and rearing activities in the WT than in βArr2-KO mice (p-values<0.001) (Fig. 1b,d); no genotype differences were observed for cumulative stereotypical activities (Fig. 1f). When locomotion was analyzed for WT animals, the LSD-stimulated responses were higher than those for all other treatment groups (p-values<0.001) (Fig. 1b). Hence, all three doses of MDL were efficacious in suppressing the LSD-induced hyperlocomotion. Although LSD increased locomotor activity in βArr2-KO mice, it was not significantly different from any other treatment group. In WT mice rearing activities were increased with LSD over those for the MDL and vehicle controls (p-values<0.001) (Fig. 1d). Although rearing was higher in WT mice given 0.05 mg/kg MDL with LSD than in the MDL and vehicle controls (p-values≤0.029), the 0.1 and 0.5 mg/kg doses of MDL reduced the LSD-stimulated rearing activity (p-values≤0.001) to the levels of these controls. By comparison, LSD was without effect in the βArr2-KO mice. An examination of stereotypy failed to detect any significant genotype effects (Fig. 1f).
Nonetheless, treatment effects were evident overall, with LSD stimulating stereotypical activities over those of the vehicle and MDL controls (p-values≤0.013); however, when 0.5 mg/kg MDL was given with LSD the psychedelic effects were abrogated (p=0.003). Collectively, these results indicate that LSD stimulates motor responses to similar extents in the βArr1 and βArr2 WT mice and in βArr1-KO animals, and that the 5-HT2AR antagonist MDL blocks these LSD-stimulated activities. By striking comparison, LSD exerts minimal effects on these same responses in the βArr2-KO mice, where none of these activities were significantly increased above those of the vehicle or MDL controls. In contradistinction to the βArr1 mice, genotype differences were found between the βArr2 animals (Supplementary Table S8). The numbers of head twitches were significantly higher in the WT than in βArr2-KO mice treated with LSD alone or 0.05 mg/kg MDL with LSD (p-values<0.001) (Fig. 2b). In WT mice head twitches were augmented by LSD and by 0.05 or 0.1 mg/kg MDL plus LSD relative to the MDL and vehicle controls (p-values<0.001). While 0.05 mg/kg MDL was ineffective in reducing this LSD-mediated behavior, both the 0.1 and 0.5 mg/kg doses significantly suppressed this response (p-values≤0.002), with the higher MDL dose being the more efficacious (p<0.001). The LSD and 0.05 or 0.1 mg/kg MDL plus LSD treatments also increased head twitch behaviors in βArr2-KO animals compared to their MDL and vehicle controls and the 0.5 mg/kg MDL plus LSD treatment (p-values≤0.023). Only 0.5 mg/kg MDL was sufficient to depress this LSD-initiated response (p=0.019) to the levels of the controls. For grooming, the durations of grooming were higher in the WT than in the corresponding βArr2-KO groups administered LSD alone, 0.05 mg/kg MDL plus LSD, or 0.5 mg/kg MDL with LSD (Supplementary Table S8). In WT mice LSD augmented grooming relative to the MDL and vehicle controls (p<0.001). While 0.05 mg/kg MDL failed to block the LSD effect, both the 0.1 and 0.5 mg/kg doses were efficacious (p-values<0.001) in returning responses to control levels. In βArr2-KO animals, LSD failed to increase the duration of grooming responses relative to the MDL and vehicle controls. Nevertheless, grooming was enhanced in the group administered 0.05 mg/kg MDL plus LSD relative to all groups (p-values≤0.013), except those given LSD alone. Aside from disturbing grooming, LSD also induced retrograde walking (Supplementary Table S8). Here, genotype effects were observed in mice that received LSD or 0.05 mg/kg MDL with LSD (p-values≤0.040) (Fig. 2f). In WT mice LSD potentiated the incidences of retrograde walking compared to the MDL and vehicle controls (p<0.001). Although 0.05 mg/kg MDL was ineffective in decreasing this LSD-stimulated behavior, both the 0.1 and 0.5 mg/kg doses suppressed this response (p<0.001). By contrast, LSD was without significant effect on retrograde walking in the βArr2-KO animals. Differential nose-poking responses were observed between the βArr2 mice (Fig. 2h). Here, LSD stimulated nose-poking behaviors in WT mice relative to all other groups (p<0.001). All doses of MDL suppressed LSD-stimulated nose poking in WT mice (p-values≤0.007) to the levels of the MDL and vehicle controls. By comparison, nose-poking in βArr2-KO animals was not distinguished by treatment condition. Since LSD can induce a fragmentation of consciousness [3], we examined grooming in detail, since it has a chained organization of responses in rodents [51].
Inspection of the video recordings confirmed that all genotypes engaged in a normal sequence of grooming, beginning with the face, progressing down the body, and ending at the feet or tail (Movie 1). When LSD was administered, the sequence of grooming in the WT and βArr1-KO mice became abbreviated, non-sequential, and/or restricted to one area of the body (Movies 2-3). By sharp comparison, the grooming sequence was complete and rarely perturbed in the βArr2-KO animals (Movie 4). When MDL alone was injected, the organization of grooming was intact in the WT and βArr1-KO mice (Movie 5). With MDL the βArr2-KO animals might pause in the grooming bout and/or display twitching of neck and back muscles, but they would finish the sequence (Movie 6). The patterns of grooming among the genotypes with MDL plus LSD were divergent. In WT mice, when MDL was given with LSD, the organization of grooming was restored (Movie 7). When the βArr1 mutants received the same treatment, they began the grooming sequence, engaged in focal grooming of a part of the body, and then completed the sequence (Movie 8). When this same drug combination was administered to βArr2-KO mice, they usually began the sequence appropriately, but at some mid- or later point they would become focused on one area; however, they usually completed the grooming sequence (Movie 9). In summary, responses to LSD across these LSD-stimulated behaviors were usually similar between the βArr1 genotypes, and MDL reduced these responses to the levels of the controls. Importantly, the βArr2 mice responded quite differently than the other genotypes. Responses to LSD were significantly higher overall in WT than in βArr2-KO animals. LSD did not significantly increase grooming, retrograde walking, or nose poking behaviors in these mutants. Notably, LSD disrupted the sequences of grooming in WT mice from both strains and in the βArr1-KO mice; the βArr2-KO animals were unaffected. Nonetheless, divergent responses to MDL alone or MDL plus LSD were observed among the genotypes. LSD and MDL100907 effects on prepulse inhibition LSD disrupts PPI in both rats and humans [44,52] and the response can be restored with a 5-HT2AR antagonist [44,53]. βArr1 mice were pre-treated with the vehicle or with 0.1 or 0.5 mg/kg MDL. Subsequently, they were administered the vehicle or 0.3 mg/kg LSD and tested in PPI. No significant genotype or treatment effects were observed for null activity or in response to the 120 dB startle stimulus (Supplementary Fig. S3a-b; Supplementary Table S9). In contrast, LSD depressed PPI in both βArr1 genotypes (p-values≤0.002) relative to the MDL and vehicle controls (Fig. 3a). When WT mice were administered either dose of the 5-HT2AR antagonist with LSD, it restored PPI to control levels. In stark contrast, neither dose of the antagonist was effective in blocking the LSD effects in the βArr1-KO mice. Notably, PPI in these two groups was significantly lower than that in similarly treated WT animals (p-values≤0.018). Since haloperidol can normalize PPI in mouse models [33], we tested whether this antipsychotic drug could normalize the LSD-disrupted PPI in the βArr1-KO mice (Supplementary Table S10). Overall null activity was higher in the 0.1 mg/kg haloperidol plus LSD group than in mice treated with haloperidol alone or the vehicle control (p-values≤0.009) (Supplementary Fig. S2c). An assessment of startle activity revealed this activity was lower overall in the WT relative to the βArr1-KO mice (p=0.028) (Supplementary Fig. S2d).
For PPI, overall responses were reduced in the βArr1-KO compared to the WT animals (p=0.008; Fig. 3b). Treatment effects were noted where LSD suppressed PPI relative to all other treatment conditions (p-values≤0.002). Hence, haloperidol normalized the LSD-disrupted PPI in both βArr1 genotypes. PPI responses in the βArr2 mice were also examined. Overall null activity was decreased in the combined 0.1 mg/kg MDL plus LSD group compared to the vehicle control and the LSD group (p-values≤0.003) (Supplementary Fig. S2e; Supplementary Table S11). No significant differences were detected for startle activity (Supplementary Fig. S2f). Nevertheless, striking genotype differences were evident for PPI (Fig. 3c). Here, responses to LSD and to the 0.05 mg/kg MDL plus LSD treatment were reduced in WT relative to the βArr2-KO mice (p-values≤0.001). In WT animals, LSD suppressed PPI compared to the MDL and vehicle controls (p=0.001). The 0.05 mg/kg dose of MDL was insufficient to restore the LSD-disrupted PPI relative to the vehicle and MDL controls, whereas with 0.1 mg/kg MDL normalization occurred. By dramatic comparison, LSD was completely without effect in the βArr2-KO mice. Collectively, these findings show that LSD disrupts PPI in both genotypes of the βArr1 mice and in the βArr2 WT animals. MDL restores PPI in both WT strains, whereas haloperidol is required to normalize it in βArr1-KO mice. By contrast, PPI in βArr2-KO mice is unaffected by this psychedelic. LSD and MDL100907 effects on temperature regulation LSD is reported to increase and to decrease body temperatures in rodents [54], and both the 5-HT1AR and the 5-HT2AR can modulate these responses (Supplementary Table S12). Under vehicle conditions, body temperatures were higher in WT mice at 0 and 120 to 240 min compared to the βArr1-KO animals (p-values≤0.011) (Fig. 4a-b). With MDL, temperatures in WT animals were lower at 0 min, but were higher at 180 and 240 min than in the mutants (p-values≤0.026). LSD produced a biphasic response in WT animals, but only a monophasic effect in mutants, with genotype differences at 30, 120, 180, and 240 min (p-values≤0.046). Within WT animals, LSD reduced body temperatures at 15 and 30 min, but increased them at 60, 120, and 180 min relative to vehicle (p-values≤0.008). By comparison, in βArr1-KO animals LSD only enhanced temperatures at 60 through 240 min (p-values≤0.011). Although MDL decreased body temperatures in WT mice at 0, 15, and 30 min (p-values≤0.019), it exerted no effects on the mutants at these times. Since LSD produced such dramatic effects on body temperatures between the βArr1 genotypes, different doses of MDL were administered with LSD in an attempt to normalize their temperatures (Supplementary Table S12). No genotype differences were obtained when 0.25 mg/kg MDL was given with LSD (Fig. 4c-d). Responses to 0.5 mg/kg MDL with LSD, however, appeared as higher temperatures in WT animals at 0, 30, 120, 180, and 240 min (p-values≤0.021) and lower temperatures at 30 min than in the mutants (p<0.001). In WT mice, 0.25 mg/kg MDL normalized their LSD-perturbed body temperatures (Fig. 4c). By comparison, this low dose was ineffective in βArr1-KO animals, whereas 0.5 mg/kg MDL was required to normalize their body temperatures to the vehicle control (Fig. 4d). Because body temperatures in βArr1-KO mice were not fully restored with the 5-HT2AR antagonist, we examined whether a 5-HT1AR antagonist could block LSD's effects (Supplementary Table S12).
Temperature responses to 0.25 mg/kg WAY 100635 (WAY) were significantly higher in WT than in mutants at 30 and 60 min (p-values≤0.012) (Fig. 4e-f). The LSD-induced responses in WT mice were lower at 30 min but higher from 120 to 240 min than in the βArr1-KO animals (p-values≤0.036). In WT mice, body temperatures were lower in WAY-treated mice than in those given LSD at 15 to 180 min (p-values≤0.052) (Fig. 4e). By comparison, temperatures in βArr1-KO mice given WAY were higher than the LSD group at 0 min (p=0.028), but were lower from 60 min to 180 min (p-values≤0.002) (Fig. 4f). Apart from these differences, body temperatures to WAY across time in βArr1-KO mice were different from those of the vehicle group only at 0 and 240 min (p-values≤0.046), whereas no significant differences across time were observed for WT mice. To determine whether WAY exerted any effects on LSD responses, 0.1 or 0.25 mg/kg WAY was administered with this psychedelic (Supplementary Table S12). Although in WT mice both doses of WAY normalized the LSD-induced changes in body temperatures to the vehicle control, 0.25 mg/kg WAY appeared to be the more efficacious (Fig. 4g). While there were some variabilities across time in mutants with the 0.1 and 0.25 mg/kg doses, their LSD-disturbed temperatures were restored to levels that were statistically indistinguishable from the vehicle, with the exception of 0.1 mg/kg WAY at 120 min (p=0.011) (Fig. 4h). Together, 0.25 mg/kg MDL and 0.25 mg/kg WAY were most efficacious in WT mice, while 0.5 mg/kg MDL was partially effective at later times and 0.25 mg/kg WAY was most proficient in the mutants. Effects of LSD on temperature regulation were examined also in the βArr2 mice (Supplementary Table S13). Responses to the vehicle or 0.5 mg/kg MDL were undifferentiated by genotype (Fig. 5a-b). By comparison, the LSD-induced changes in body temperatures were lower at 15 min (p=0.001) and higher at 60, 120, and 180 min in WT than in the βArr2-KO mice (p-values≤0.019). In WT animals, LSD produced biphasic changes in temperatures at 30, 120, and 180 min relative to vehicle (p-values≤0.032) (Fig. 5a). By contrast, no significant LSD effects were observed in βArr2-KO mice at any time-point (Fig. 5b). Temperature responses to MDL in mutants, however, were decreased relative to vehicle at 15 and 60 min (p-values≤0.013). When 0.25 mg/kg MDL was given with LSD, no differential genotype effects were noted (Fig. 5c-d). Nonetheless, body temperatures in WT mice administered 0.5 mg/kg MDL plus LSD were higher than in the mutants (p-values≤0.032). This same dose also produced deviations from the vehicle control in WT animals at 15 and 30 min (p-values≤0.035) and in βArr2-KO mice at 15, 60, and 120 min (p-values≤0.035). Regardless, in WT animals 0.25 mg/kg MDL normalized the LSD-induced changes in body temperature across time (Fig. 5c). Effects of WAY were evaluated next in the βArr2 mice (Supplementary Table S13). Genotype differences in response to the 0.5 mg/kg WAY control were observed at 0, 60, and 180 min (p-values≤0.023) (Fig. 5e-f). Body temperatures were modulated by LSD in WT relative to βArr2-KO animals at 15, 60, 120, and 180 min (p-values≤0.017). Within WT mice, temperatures were enhanced with WAY at 0 min relative to the vehicle control (p=0.021) (Fig. 5e). In contrast, responses to WAY were not evident in βArr2-KO animals (Fig. 5f).
When 0.25 or 0.5 mg/kg WAY was administered with LSD, genotype differences were found at 15, 30, and 240 min with the lower dose (p-values≤0.048) and at 240 min with the higher dose (p=0.024) (Fig. 5g-h). For WT mice, 0.25 or 0.5 mg/kg WAY restored the LSD-disturbed body temperatures to levels that were not significantly different from the vehicle control (Fig. 5g). In summary, the low doses of MDL and WAY were both efficacious in reverting the LSD-induced changes to normal in WT animals. By comparison, LSD was without effect in βArr2-KO mice. Effects of Arrb1 or Arrb2 deletion on 5-HT2AR expression Finally, we examined whether deletion of Arrb could alter 5-HT2AR expression by radioligand binding with brains from WT and βArr1-KO, and WT and βArr2-KO littermates. When [3H]ketanserin affinity binding was examined, it was found to be significantly higher in βArr2-KO samples than in βArr1 WT and KO brains (p-values≤0.039) (Supplementary Fig. S4a). By contrast, the numbers of 5-HT2AR binding sites were higher in brain extracts from βArr2-KO mice than from βArr1 WT and KO animals and the βArr2 WT mice (p-values≤0.034); no significant differences were observed among the latter three genotypes (Supplementary Fig. S4b). We examined also 5-HT2AR immunofluorescence in βArr1 and βArr2 brain sections (Supplementary Fig. S4c-f). Here, we detected no apparent alterations in the relative receptor distributions among the genotypes. Together, these results are consistent with the hypothesis that neither Arrb1 nor Arrb2 genetic deletion decreases 5-HT2A receptor expression. Nonetheless, Arrb2 disruption leads to increased expression of 5-HT2ARs in brain. DISCUSSION In the present study, we analyzed whether global deletion of Arrb1 or Arrb2 modifies responses to LSD in the mutants relative to their respective WT controls. In the open field, 0.3 mg/kg LSD stimulated motor activities in the βArr1 and βArr2 mice. While these locomotor effects were observed in WT animals from both βArr strains and in the βArr1-KO mice, LSD failed to significantly modify rearing or stereotypical activities in WT and βArr1-KO mice. By comparison, LSD stimulated locomotor and rearing activities in βArr2 WT animals, while LSD was without effect in the βArr2-KO mice. Together, these results indicate that LSD-induced motor activities are regulated primarily through βArr2-mediated signaling. In this regard, βArr2 has been reported to play a similar role in morphine-stimulated hyperlocomotion [57] and amphetamine-stimulated locomotor and rearing activities in βArr2 mice [58]. In our experiments, LSD augmented motor activities in βArr1 and βArr2 WT mice and in βArr1-KO animals. By contrast, in rats this hallucinogen has been reported to inhibit ambulation and rearing [40], to stimulate locomotor activities [37,43], to induce divergent effects on motor responses [38], or to produce biphasic (i.e., inhibitory, then stimulatory) effects [39,41,42]. In addition, a sex effect in rats has been observed with LSD [46]. In our experiments with mice, we failed to discern LSD effects attributable to sex. The inhibitory effects of LSD in rats were seen very soon after placement or entry into the open field. In our open field experiments, we did not observe any inhibition of responses with 0.3 mg/kg LSD; only stimulatory effects were evident. This absence of LSD-induced inhibitory effects in our studies could be attributed to differences in the species tested, the dose of LSD used, the test environment and apparatus, and/or the test procedure.
For instance, in humans LSD's behavioral effects are well-known to be context specific [1-3], and the 30 min habituation to the novel environment prior to LSD injection may have reduced emotionality in our mice, such that only the stimulatory effects of LSD were evident. To determine whether the motor-stimulating effects of LSD were due to 5-HT2AR activation, MDL was used as an antagonist. When administered alone, 0.5 mg/kg MDL exerted no effects on motor performance in either βArr mouse strain. Parenthetically, MDL has been found also to be without effect on spontaneous motor activity in rats [59]. Nonetheless, 0.1 and 0.5 mg/kg MDL blocked the motor-stimulating effects of LSD in the βArr1 and βArr2 WT mice and in the βArr1-KO animals. A similar effect has been attributed to MDL's effects on LSD-stimulated motor activities in rats [43]. Hence, at the doses tested in our experiments, the present results indicate that the LSD-induced hyperactivity in the βArr mice is promoted through the 5-HT2AR. Besides motor activity, we examined the effects of LSD on head twitch, grooming, retrograde walking, and nose-poking behaviors. LSD and other psychedelics are well-known to stimulate head-twitch responses in mice [18,45,48] and this response has been proposed as a proxy for hallucinations in humans [13]. Compared to the vehicle, LSD stimulated head-twitch responses to similar extents in WT and βArr1-KO mice. This psychedelic also activated head twitches in βArr2 animals; however, responses in WT controls were significantly more robust than in βArr2-KO mice. It should be emphasized that these results were surprising, since the numbers of 5-HT2AR binding sites were increased in brains of the βArr2-KO animals relative to the other genotypes and because head twitches are mediated through this receptor [18]. Regardless, in both βArr1 and βArr2 mice, MDL reduced the numbers of head twitches to the levels of the vehicle controls. These findings are consistent not only with the known action of MDL in blocking head-twitch responses to various hallucinogens [15-17], but also with the inability of LSD and other psychedelics to induce this response in the htr2A homozygous mutant mice [18,19,45]. Aside from head twitches in rodents, LSD stimulates grooming behaviors in cats [60] and it can stimulate or inhibit grooming in mice [46,47]. In our investigations, LSD augmented grooming in βArr1 and βArr2 WT mice, and in βArr1-KO animals. By comparison, LSD failed to prolong grooming in βArr2-KO mice beyond that of vehicle controls. In all cases, 0.1 and 0.5 mg/kg MDL returned the LSD-stimulated grooming to levels indistinguishable from the controls. Thus, antagonism of the 5-HT2AR was sufficient to restore LSD-induced grooming to baseline levels. Effects of LSD were examined also on the organization of grooming. Under vehicle treatment, all mice displayed similar patterns of grooming that began with the face, progressed to the flanks, and ended with the feet or tail. LSD disorganized this sequence of events such that in the WT and βArr1-KO mice, grooming was often restricted to the flanks, feet, or tail, was non-sequential, or was fragmented. By comparison, grooming in the βArr2-KO mice was largely unaffected by LSD. MDL did not alter grooming in the WT and βArr1-KO mice, whereas it prolonged grooming and promoted twitching of the neck and back muscles in the βArr2 mutants.
This antagonist blocked the LSD-disrupting effects on grooming in WT mice and it mostly restored the organization of grooming in βArr1-KO animals. The MDL-LSD combination in βArr2-KO animals produced some disturbances, but the mice typically completed the grooming sequence. Together, these results suggest that additional receptor systems may be involved in these LSD-induced grooming responses. It should be emphasized that LSD perturbs grooming also in rats [43] and cats [60]. LSD-induced states share many similarities with the early acute phases of psychosis [3]. PPI is abnormal in individuals diagnosed with schizophrenia [61] and LSD disrupts PPI in rats [43,46,53]. In βArr1 mice LSD impaired PPI in both genotypes without affecting startle or null activities. Both 0.1 and 0.5 mg/kg MDL restored LSD-disrupted PPI, but only in the WT mice; an effect consistent with the action of the 5-HT2AR antagonist MDL11,939 in rats [53]. By comparison, βArr1-KO animals were unresponsive to MDL. Since LSD activates human dopamine D2 receptors [9,62], we used haloperidol as a D2 antagonist. At a dose of 0.1 mg/kg, haloperidol restored the LSD-disrupted PPI in βArr1-KO mice. Parenthetically, both 0.1 and 0.2 mg/kg haloperidol failed to rescue PPI in rats given 0.1 mg/kg LSD (s.c.) [43]; the possible reasons for this discrepancy in mice versus rats are unclear. When βArr2 mice were tested, LSD was observed to impair PPI only in WT mice. Notably, βArr2-KO mice were completely unresponsive to this psychedelic. As with βArr1 WT animals, 0.5 mg/kg MDL normalized the LSD-disrupted PPI in βArr2 WT animals. Thus, the LSD effects on PPI in the βArr mice are complex, with restoration of PPI with MDL in both strains of WT mice, normalization of PPI with haloperidol in βArr1-KO animals, and without any discernable effect in the βArr2-KO subjects. Aside from behavioral studies, LSD effects on body temperatures were evaluated. Parenthetically, LSD serves as an agonist at both 5-HT1ARs and 5-HT2ARs [10,11], and stimulation of either receptor can modify body temperatures in rats [63]. LSD is reported to increase or have no effects on temperatures in mice and to increase or decrease temperatures in rats [54]. We found LSD to alter body temperatures in a biphasic fashion in βArr1 and βArr2 WT mice, in a monophasic manner in βArr1-KO animals, and to be without effect in βArr2-KO subjects. Both MDL and WAY normalized these LSD effects in both WT strains and in βArr1-KO mice. Neither antagonist appeared to be more efficacious than the other. This finding should not be surprising, since functional interactions have been described between the 5-HT1ARs and 5-HT2ARs in regulating body temperatures [61]. Although we did not test it, 5-HT2CR antagonists are known to participate in this process. Nevertheless, it appears that antagonism of 5-HT1ARs and 5-HT2ARs is sufficient to normalize the LSD-induced changes in body temperatures in βArr1 and βArr2 mice. LSD and other psychedelics are well-known for their hallucinogenic actions [1] and these responses have been attributed to 5-HT2AR agonism [12]. We observed LSD to stimulate motor activity, head twitches, grooming, retrograde walking, and nose-poking in the βArr1 and βArr2 WT mice and in βArr1-KO animals. LSD also disrupted PPI and produced diverse changes in body temperatures in these same mice. The LSD-elicited responses in βArr2-KO mice were either significantly attenuated or completely absent.
In conditions where LSD produced changes in behavior, these alterations were blocked with the 5-HT2AR antagonist MDL. While these results suggest that the 5-HT2AR is an essential component for all these responses, it should be recalled that LSD exerts a plethora of actions at many GPCRs [9-11] and, aside from head twitch, other responses are inconsistently affected by hallucinogens [18]. Hence, it is likely that LSD's effects on the 5-HT2AR are involved in a cascade of many GPCR-signaling events mediating these varied responses. Like other GPCRs, agonist actions at the 5-HT2AR can lead to G protein-dependent and -independent signaling, the latter involving βArr [23-25]. While both βArr1 and βArr2 are ubiquitously expressed in adult rodent brain, expression of βArr2 mRNA is much higher than that for βArr1, except in selected brain areas [64]. Thus, it may not be surprising that the LSD-elicited responses were less disturbed in the βArr2-KO than in the βArr1-KO mice, because the latter still retains intact βArr2-mediated signaling. In this regard, it was especially intriguing that LSD-induced head twitch responses were much more robust in the βArr1 and βArr2 WT mice and in the βArr1-KO animals than in the βArr2 mutants. Our results with LSD indicate that βArr2 may be essential for the expression of hallucinogenic-like actions. Hence, it may be possible to develop novel functionally-selective ligands that preferentially signal through G protein at the 5-HT2AR and avoid the hallucinogenic effects modulated through βArr. The authors have no conflicts. BLR is listed as an inventor on a submitted patent related to 5- (Figure legend fragment, panel h: temperatures in βArr1-KO animals subjected to the same treatments; N = 9-12 mice/group.)
2021-02-12T14:20:46.558Z
2021-02-06T00:00:00.000
{ "year": 2021, "sha1": "0b46bdb5a02ce73813e88ff3819895567367ca6b", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/02/06/2021.02.04.429772.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "0b46bdb5a02ce73813e88ff3819895567367ca6b", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270265812
pes2o/s2orc
v3-fos-license
Cost-Effectiveness Analysis of the Oncotype DX Breast Recurrence Score® Test from a US Societal Perspective Background and Objectives The 21-gene assay (the Oncotype DX Breast Recurrence Score® test) estimates the 10-year risk of distant recurrence in hormone receptor positive (HR+) and human epidermal growth factor receptor 2 negative (HER2-) early-stage breast cancer to inform adjuvant chemotherapy decisions. The cost-effectiveness of the 21-gene assay compared against standard clinical-pathological risk tools alone for HR+/HER2- early-stage breast cancer was assessed using an economic model informed by evidence from randomized controlled trials. Materials and Methods A cost-effectiveness model consisted of a decision-tree to stratify patients according to their Recurrence Score (RS) results and the use of adjuvant chemotherapy, followed by a Markov component to estimate the long-term costs and outcomes of the chosen treatment. Distributions of patients and distant recurrence probabilities were derived from the TAILORx (N0) and RxPONDER (N1) trials. The model was evaluated from a healthcare payer and societal perspective. Endocrine therapy and chemotherapy use were informed using clinical expert opinion to reflect US clinical practice and were combined with Medicare drug costs (2021) to estimate the cost of treatment. Societal costs included lost productivity and patient out-of-pocket costs obtained from literature. Results The Oncotype DX test generated more quality-adjusted life-years (QALYs) (N0: 0.25; N1: 0.08) at a lower cost (N0: -$13,395; N1: -$2526) compared to clinical-pathological risk alone from a societal cost perspective. The overall conclusions from the model did not change when considering a payer perspective. The main cost drivers were avoidance of distant recurrence for N0 (-$12,578), and the cost of adjuvant chemotherapy for N1 (-$2133). Lost productivity had a major impact in the societal perspective analysis (N0: -$4607; N1: -$1586). Conclusion Adjuvant chemotherapy decisions based on the RS result led to more life year gains and lower healthcare costs (dominant) compared to using clinical-pathological risk factors alone among patients with HR+/HER2- N0 and N1 early-stage breast cancer. Introduction Endocrine therapy, or chemotherapy in combination with endocrine therapy, is used as first-line treatment of early-stage hormone receptor positive (HR+) and human epidermal growth factor receptor 2 (HER2) negative breast cancer.1,2 Clinical assessment to inform decisions on the choice of treatment for early-stage breast cancer considers clinical-pathological factors such as tumor grade, size, and nodal burden.3 Breast cancer tumour gene expression profiling using multigene assays (MGAs) can contribute additional prognostic information to guide treatment decisions. Some MGAs can additionally predict the benefit of adjuvant chemotherapy, such as the 21-gene assay (the Oncotype DX Breast Recurrence Score® test, Exact Sciences, Madison, WI, USA). The 21-gene assay measures the expression of 21 genes using reverse-transcriptase polymerase chain reaction (RT-PCR). It is used to calculate a Recurrence Score result between 0 and 100, which can estimate the risk of distant recurrence with hormone therapy alone (no chemotherapy) and the magnitude of reduction in risk from adding chemotherapy treatment for patients with HR+, HER2- early invasive breast cancer.
4,5 The Recurrence Score result can be used by oncologists and patients to inform the use of adjuvant chemotherapy, in combination with information on prognostic clinical-pathological factors. The clinical utility of the 21-gene assay to identify women with HR+/HER2- node-negative (N0) or node-positive (N1, 1-3 positive lymph nodes) early breast cancer who could safely forego chemotherapy treatment has been demonstrated in the TAILORx and RxPONDER Phase III randomized controlled trials (RCTs).6,7 The TAILORx study recruited 10,273 patients with HR+/HER2- and node-negative early breast cancer and randomized patients with RS results 11-25 to be treated with chemo-endocrine therapy or endocrine therapy alone. It demonstrated that patients within this intermediate RS result range did not benefit from added chemotherapy in terms of distant recurrence-free survival, invasive disease-free survival or overall survival.6 The RxPONDER study included 5083 patients with HR+/HER2- node-positive early breast cancer and RS results 0-25, randomized to be treated with chemo-endocrine or endocrine therapy alone. A statistically significant treatment effect of chemotherapy was reported in terms of overall survival, invasive disease-free survival, and distant recurrence-free interval, for premenopausal women only.7 Previous economic evaluations from a US healthcare payer perspective have shown the 21-gene assay to be either cost-saving or cost-effective for patients identified as intermediate or high-risk based on clinical-pathological factors.8,9 These analyses relied on older data from a meta-analysis in the UK.10 There is an unmet need to examine the cost-effectiveness of the 21-gene assay based on recently published evidence from the TAILORx and RxPONDER clinical trials, which have updated RS risk groups. A diagnosis of breast cancer and concomitant chemotherapy utilization are both associated with a substantial burden on the patient and society as a whole in terms of out-of-pocket costs11 and lost productivity.12,13 Economic evaluations conducted solely from a healthcare payer perspective may be insufficient to capture the full economic impact of using multi-gene assays to guide chemotherapy decisions. This economic evaluation estimated the cost-effectiveness of the 21-gene assay compared to using clinical-pathological risk factors alone to guide the use of adjuvant chemotherapy for HR+/HER2- early-stage breast cancer patients from a societal perspective in the base case. Study Population, Intervention, and Comparators A hypothetical patient cohort was divided into subgroups according to their RS result. It was assumed that the proportion of patients in each RS result category is identical for both the 21-gene assay and the comparator in the model, reflecting the same distribution of genomic risk whether or not the 21-gene assay was used. In other words, if we were to test those in the clinical-pathological risk alternative, there is no reason to believe that they would have a different genomic risk distribution compared to those tested with the 21-gene assay. The differences in costs and outcomes are determined by differences in chemotherapy assignment only.
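The decision-tree logic just described can be made concrete with a small sketch: one shared genomic-risk distribution, and two arms that differ only in their probability of assigning chemotherapy within each risk stratum. All numbers below are hypothetical placeholders, not the survey-derived inputs used in the actual model:

# One RS-result distribution shared by both arms; only the probability
# of chemotherapy assignment differs between arms. Values are invented.

rs_distribution = {"low": 0.20, "intermediate": 0.65, "high": 0.15}

p_chemo = {
    "assay":    {"low": 0.02, "intermediate": 0.10, "high": 0.85},
    "clinical": {"low": 0.35, "intermediate": 0.45, "high": 0.60},
}

for arm, probs in p_chemo.items():
    expected = sum(rs_distribution[g] * probs[g] for g in rs_distribution)
    print(f"{arm:>8}: expected chemotherapy use = {expected:.1%}")
# The gap between the two expected-use figures is what drives the cost
# and QALY differences in the downstream Markov component.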
The model base-case population included women with N0 and N1 HR+/HER2- early-stage invasive breast cancer. Additional subgroups included in the model that enabled stratification of results included age (≤50 and >50 years) and clinical risk (using the definition of low clinical risk from TAILORx: tumor size ≤3 cm and grade 1, ≤2 cm and grade 2, or ≤1 cm and grade 3) for N0 patients, premenopausal and postmenopausal status for N1 patients, and patients with micrometastases (N1mi).

Model Structure A cost-effectiveness model built in Microsoft Excel included a decision-tree to stratify patients according to genomic risk and assigned adjuvant treatment (Figure 1), followed by a Markov model to simulate long-term costs and outcomes using 6-month model cycles (Figure 2), which is a similar approach to that seen in other models evaluating the 21-gene assay.8,9 The analysis used a lifetime horizon and assumed a US societal perspective, which included Medicare costs, patient out-of-pocket costs, and indirect costs of lost employment associated with a diagnosis of breast cancer or its treatment. A scenario analysis was conducted from a narrower US Medicare perspective only. The model was designed to follow best practices for modelling and used an annual rate of 3% to discount lifetime costs and outcomes in accordance with US guidelines.14

Clinical Inputs The assignment of patients to RS subgroups in the model was based on the TAILORx6 and RxPONDER7 studies for N0 and N1 patients, respectively. Clinical inputs were aligned with updated RS subgroup definitions used in TAILORx and RxPONDER (low: 0-10 (N0), 0-13 (N1); intermediate: 11-25 (N0), 14-25 (N1); high: 26-100 (both)). In some cases, older cut-points were used in the absence of recent studies, such as with the Surveillance, Epidemiology, and End Results (SEER) database used to inform the N1mi subgroup. A survey of nine breast oncologists informed chemotherapy allocation inputs (the methodology and results of the survey are described in the text and Table S1). The incidence and cost of short-term adverse events (AEs) of chemotherapy were taken from Wang et al8 and are shown in Table 1.

The probabilities of distant recurrence with chemo-endocrine or endocrine therapy for the overall N0 patient group with RS results 11-25 were derived from 9-year distant recurrence-free interval (DRFI) data reported in TAILORx6 and in the TAILORx exploratory analysis for subgroups by age and clinical risk.15 In order to estimate the probability of distant recurrence for patients with RS>25 who were not randomized to endocrine therapy, baseline hazard rates from TAILORx were combined with hazard ratios reported in the NSABP B-20 study.16 No chemotherapy benefit was assumed for patients with RS<11 across all N0 subgroups.

The probability of distant recurrence for N1 patients was derived from 5-year DRFI reported from RxPONDER in the 2021 SABCS presentation.17 Considering that RxPONDER was restricted to patients with RS 0-25, distant recurrence outcomes with endocrine therapy for patients with RS>25 were derived from TransATAC18 and chemotherapy benefit for this subgroup was informed by SWOG-8814.5 Inputs and assumptions for the N1mi subgroup are described in the Online supplement. All DRFI estimates were converted to 6-month transition probabilities and applied over the lifetime horizon in the model.
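As an illustration of this conversion step, the sketch below turns a cumulative distant recurrence probability reported over a trial horizon into a 6-month transition probability, assuming a constant hazard over the horizon; the function name and the 5% example input are hypothetical, not values taken from the model.

```python
import math

def drfi_to_cycle_probability(cum_event_prob, horizon_years, cycle_years=0.5):
    """Convert a cumulative event probability over `horizon_years` (e.g., a
    9-year distant recurrence probability derived from DRFI data) into a
    per-cycle transition probability, assuming a constant hazard."""
    annual_rate = -math.log(1.0 - cum_event_prob) / horizon_years
    return 1.0 - math.exp(-annual_rate * cycle_years)

# Hypothetical example: 5% cumulative 9-year distant recurrence probability
print(round(drfi_to_cycle_probability(0.05, horizon_years=9.0), 5))  # ~0.00285
```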
The estimated benefit of chemotherapy assumes that the treatment effect of endocrine therapy remains unchanged. The model did not account for the possible addition of different types of endocrine therapy to the treatment regimen, or other treatments used as alternatives to chemotherapy, such as ovarian function suppression. The impact of these treatments is unknown.

Mortality in the recurrence-free health state was assumed to be in line with age-adjusted mortality for the general population based on US life tables (Table S2). Mortality after distant recurrence was informed by median survival from the MONARCH 2 trial.19 For the AML and CHF health states, mortality was informed using NICE technology appraisal TA552,20 and Wang et al,8 respectively.

Health-Related Quality of Life (HRQoL) and Cost Inputs The model estimated HRQoL using utility values attached to the Markov health states and decrements representing one-off decreases in utility associated with chemotherapy adverse events (AEs) and local recurrence. The sources of utility inputs in the model are described in the Online supplement. The cost of the 21-gene assay was obtained based on the Medicare price in the US.21 For the purposes of estimating drug costs, the distribution of adjuvant treatments was obtained from NCCN guidelines to approximate real-life US clinical practice.22 Inputs for health state costs were derived from the literature and clinical expert opinion and are described in detail in the Online supplement. Inputs and calculations for treatment costs in the model are reported in Tables S3-S7. Societal costs have also been included in the model for patient out-of-pocket expenses associated with chemotherapy, and workdays lost secondary to chemotherapy administration and development of distant recurrence (Table S8). All costs were reported in 2021 US Dollars. The full set of model inputs and corresponding confidence intervals and distributions are reported in Table S9.

Analytical Approach Cost-effectiveness analysis results were presented using the incremental cost-effectiveness ratio (ICER), incremental cost, incremental quality-adjusted life-years (QALYs), life years (LYs), net monetary benefit (NMB), percentage of patients avoiding chemotherapy, and percentage of patients avoiding distant recurrence. A societal perspective was used in the base case, with a narrower payer perspective presented as a scenario analysis. One-way and probabilistic sensitivity analyses (PSA) tested uncertainty in the model, and scenario analyses were conducted to examine key model assumptions. Decision uncertainty was illustrated using cost-effectiveness acceptability curves (CEAC). The external validity of the model was tested by comparing breast-cancer specific mortality (BCSM) and overall mortality estimated using the model against real-life data from the SEER registry. The reporting of the analysis was in line with the Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) criteria,23 reported in Table S30.

Patient Characteristics in the Modelled Cohort Patient characteristics were in line with the pivotal phase III trials used to inform the analysis: TAILORx for N0 and RxPONDER for N1. A full description of the cohort and composition according to prognostic clinical characteristics is reported in Table S10.
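The decision uncertainty summarized by a CEAC can be computed directly from PSA draws, as in the hedged sketch below; the distributions, spreads, and sample size are illustrative placeholders, not the model's actual PSA outputs.

```python
import numpy as np

def ceac(delta_costs, delta_qalys, wtp_thresholds):
    """For each willingness-to-pay threshold, return the share of PSA draws
    in which the intervention has positive net monetary benefit."""
    delta_costs = np.asarray(delta_costs)
    delta_qalys = np.asarray(delta_qalys)
    return [float((wtp * delta_qalys - delta_costs > 0).mean())
            for wtp in wtp_thresholds]

# Illustrative draws loosely centered on the N0 societal base case
rng = np.random.default_rng(0)
d_cost = rng.normal(-13395, 4000, size=5000)   # hypothetical spread
d_qaly = rng.normal(0.25, 0.08, size=5000)     # hypothetical spread
print(ceac(d_cost, d_qaly, wtp_thresholds=[50_000, 100_000, 150_000]))
```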
Base Case Cost-Effectiveness Analysis Results Use of the 21-gene assay was dominant compared to clinical-pathological risk alone: it generated more QALYs (N0: 0.25; N1: 0.08) at a lower cost (societal perspective: N0: -$13,395; N1: -$2526; healthcare payer perspective: N0: -$8842; N1: -$453) (Tables 2 and 3). A breakdown of the model results is shown in Table 4 for N0 patients and Table 5 for N1 patients. The estimates in the model compared well against outcomes in SEER data,24,25 as shown in Tables S28 and S29.

Uncertainty Analyses The one-way sensitivity analysis identified several model parameters which had a substantial impact on the results of the cost-effectiveness analysis, as shown in the Tornado diagrams in Figures 3 and 4. The results of the probabilistic analysis were broadly in line with the deterministic base case. Based on the CEAC, the 21-gene assay was very likely to be considered cost-effective at a willingness-to-pay of $100,000 per QALY for both N0 and N1 subgroups (>95%). Additional interpretation of the sensitivity analysis results is included in the Online supplement. Results of the PSA are presented using scatterplots in Figures A1, A3 and A5, and CEACs in Figures A2, A4 and A6.

Scenario Analyses Scenario analysis results are reported in Tables S11 and S12. The 21-gene assay remained dominant compared to clinical-pathological risk alone with alternative inputs for chemotherapy benefit, DRFI, and unit costs. The model was sensitive to the estimate used for chemotherapy benefit for the RS>25 group for both N0 and N1 patients, although the 21-gene assay was still expected to be a cost-effective option if the hazard ratio is set equal to the upper bound of the confidence interval reported in clinical studies. The model was also sensitive to changes in chemotherapy allocation with clinical-pathological risk alone in both N0 and N1 populations; however, the 21-gene assay was also still expected to be a cost-effective option if set to the upper and lower bound of the confidence interval obtained from the clinical expert responses.

Subgroup Analyses A top-level summary of results for all subgroups is reported in Table S13 and detailed results by subgroup are reported in Tables S14-S27. The results in all subgroups were consistent with the results in the main N0 and N1 analyses, with the exception of the N1 premenopausal subgroup, where the 21-gene assay was not cost-effective in the base case.
Interpretation of Results This analysis demonstrated that the 21-gene assay is cost-saving and generated more QALYs (dominant) compared to clinical-pathological risk factors alone to guide adjuvant treatment decisions in both N0 and N1 early breast cancer. Despite no change in the proportion of patients allocated to chemotherapy in the N0 subgroup after testing, it was the targeted use of chemotherapy informed by the RS result which led to reductions in the probability of local and distant recurrence, which ultimately resulted in both QALY gains and long-term cost savings. In the combined N1 group, the result was primarily driven by a substantial reduction in estimated chemotherapy use for postmenopausal women with RS 0-25 (66% of the N1 cohort), who are treated with comparatively safer endocrine therapy without increasing the risk of recurrence.

The analysis showed that the 21-gene assay was unlikely to be cost-effective in the premenopausal N1 subgroup. This result was driven by the findings from the RxPONDER study, which showed that premenopausal patients with RS≤25 benefit from chemotherapy irrespective of their RS result. Based on the clinical survey, the overall proportion of patients allocated to chemotherapy was expected to reduce from 76% to 65% with the 21-gene assay, which meant that the cost savings from chemotherapy sparing were offset by the increased cost of treatment and decreased QALYs due to distant recurrence. The model did not consider alternative treatments, such as ovarian function suppression, which may be preferred by physicians and patients in order to avoid the risk of chemotherapy adverse events in this subgroup. The results were impacted by uncertainty in the input values obtained from different literature sources and the unknown benefit of ovarian suppression as an alternative to chemotherapy.

The model results were consistent from both a healthcare payer perspective and a societal perspective, with additional cost savings derived from avoided patient out-of-pocket expenditures and lost days at work associated with adjuvant chemotherapy and distant recurrence. Sensitivity analyses showed that inputs for the chemotherapy benefit and DRFI in the high-risk RS subgroup had the largest impact on the results, and the probabilistic analysis demonstrated that the 21-gene assay had a high probability of cost-effectiveness assuming a threshold for willingness to pay of $100,000 per QALY. A large variation was observed in the chemotherapy allocation inputs obtained from clinical expert opinion, although scenario analyses showed that the model conclusions remained the same if alternative values were used.
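As a worked illustration of the net monetary benefit endpoint used in the one-way sensitivity analyses (defined in the Figure 3 caption), the base-case societal results above imply the following; the helper function is ours, not part of the published model.

```python
def net_monetary_benefit(delta_qalys, delta_cost, wtp=100_000):
    """NMB = WTP x incremental QALYs - incremental cost;
    positive values favor the 21-gene assay."""
    return wtp * delta_qalys - delta_cost

print(net_monetary_benefit(0.25, -13_395))  # N0 societal: 38395.0
print(net_monetary_benefit(0.08, -2_526))   # N1 societal: 10526.0
```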
Study Limitations The cost-effectiveness analysis was based on an economic model with extrapolations informed using published data and assumptions, which inherently involves uncertainty. Although uncertainty associated with the chosen parameter values and data sources was robustly analysed using one-way sensitivity analysis and probabilistic analysis, uncertainty emanating from the choice of model structure and type of model may remain in the form of structural uncertainty.

In the absence of clinical studies which included the full population and outcomes of interest, the cost-effectiveness analysis was informed using multiple studies for each patient subgroup. The benefit of chemotherapy in the RS>25 subgroup based on NSABP B-20 for N0 and SWOG-8814 for N1 was uncertain, and the impact of this on model uncertainty was tested in scenario analysis, showing no impact on study conclusions. For the N1 subgroup, DRFI estimates for the endocrine therapy arm were not reported for the pre-defined RS subgroups in RxPONDER: 0-13 and 14-25 (only the absolute chemotherapy benefit was reported for the strata). Thus, data for the overall 0-25 subgroup were applied to both, which may have underestimated the benefit of chemotherapy sparing for patients with the lowest risk of recurrence and can thus be considered a conservative case.

The investigators also had to rely on clinical expert opinion for key parameters, such as the probability of chemotherapy based on RS, which had a substantial impact on the results. The estimated proportion of patients allocated to chemotherapy varied significantly across responses, which reflects variation in clinical practice across centers in the US. However, the SEER studies recruited patients diagnosed prior to 2016, and therefore treatment decisions reported in these studies did not capture the changes in clinical practice triggered by the publication of results from the pivotal phase III TAILORx and RxPONDER trials. In addition, the SEER database has previously documented issues with under-reporting of chemotherapy.26 Despite the uncertainty associated with using clinical expert opinion, this was deemed to be the data source that best reflects current clinical practice in the US. This uncertainty was explored in scenario analyses reported in the Online Supplement. Alternative inputs based on studies informed by the SEER registry were tested in scenario analyses, with no changes to the conclusions for either N0 or N1 subgroups.

The applicability of the analysis results for certain patient groups is uncertain due to potential under-recruitment of racial and ethnic minorities that is common in clinical trials. A further description of the racial and ethnic representativeness of the TAILORx and RxPONDER studies is in the Online supplement. The impact of race and ethnicity on outcomes of treatment decisions guided by the 21-gene assay in TAILORx and RxPONDER has been explored in recent studies, which showed no differences in the performance of the assay across race subgroups.27,28 Worse outcomes with endocrine therapy alone were observed for black patients, but no statistically significant differences in chemotherapy benefit were observed across RS result subgroups. The 21-gene assay remained prognostic and predictive of chemotherapy benefit across racial and ethnic groups. There is scope for future research to examine whether the cost-effectiveness of the 21-gene assay differs across race and ethnic subgroups.
Study Strengths A systematic review of cost-effectiveness analyses of the 21-gene assay test by Wang et al criticized previously published analyses for ignoring the role of clinical-pathological factors in their evaluation of the multigene assays.29 The analysis described in this article addressed this point by considering N0 and N1 populations separately, recognizing the importance of nodal status as a prognostic factor for distant recurrence and its impact on adjuvant treatment decisions. The population was stratified further based on age and clinical risk subgroups defined in the TAILORx exploratory analysis.15 The model considered menopausal status for N1, which is a predictor of chemotherapy benefit based on the results of the RxPONDER trial. Stratification according to observed clinical risk factors allowed the investigators to assume equal chemotherapy allocation for all patients in the clinical-pathological alternative of the model within each subgroup. The assumption of the predictive ability of the 21-gene assay was highlighted as a source of uncertainty by Wang et al, with most of the studies which assumed a predictive ability concluding that the 21-gene assay is cost-effective. The analysis reported in this article considered chemotherapy benefit separately in each subgroup. Since the publication of the systematic review by Wang et al, the TAILORx and RxPONDER randomized controlled trials have demonstrated robust evidence of the absence of a chemotherapy benefit for patients with RS≤25 for all N0 and postmenopausal N1 patients, suggesting that the 21-gene assay is able to identify patients who can be safely spared chemotherapy and improve treatment decision-making in this setting. The degree of chemotherapy benefit for patients with RS>25 is uncertain in both N0 and N1 analyses, which was tested in scenario analyses. This is due to large confidence intervals for RS>25 in NSABP B-20 and SWOG-8814 and due to the design of TAILORx and RxPONDER, which did not include randomization to treatment among those with an RS>25. The 21-gene assay was still cost-effective in scenario analyses with reduced chemotherapy benefit for the RS>25 group.

Comparison to Published Evidence Previous economic evaluations demonstrated the potential cost-effectiveness of the 21-gene assay to guide chemotherapy decisions for breast cancer patients in the US who were classified according to clinical-pathological risk using PREDICT.8,9 Both models were informed using registry data combined with assumptions for the treatment effect of chemotherapy for different clinical risk and RS result subgroups. The analysis described in this article took a different approach by presenting the analysis by nodal status, informed by up-to-date evidence from randomized phase III trials, to inform the long-term effectiveness of chemotherapy decisions. In terms of study design, the model described in this article more closely resembles the design in Kunst et al, which incorporated chemotherapy assignment probabilities from published literature reflecting clinical practice at the time of publication.9
Kunst et al showed that the 21-gene assay is cost-effective for patients with intermediate and high clinical risk based on old RS cut-points, but reported that the ICER is above the cost-effective range for patients with low clinical risk. Although all N0 subgroups were dominant in the analysis reported in this paper, a similar trend was observed, with substantially larger cost savings and QALY gains for patients with high clinical risk. The results of the analysis reported here were in line with cost-effectiveness analyses conducted from a UK NHS and personal social services perspective.30,31

Recommendations for Research and Policy This model-based analysis incorporated the latest evidence from the TAILORx and RxPONDER trials, and has the potential to influence recommendations for the use of the 21-gene test in the US. The reduction in chemotherapy use resulting from MGA use for patients with a low risk of distant recurrence is likely to reduce pressure on resource-constrained oncology units. Moreover, this analysis showed that reduced chemotherapy use and avoidance of distant recurrence as a result of adjuvant treatment decisions utilizing the 21-gene assay substantially reduce patient out-of-pocket costs and lost productivity. Decision-makers need to consider patient and wider societal impacts in addition to healthcare costs when making their recommendation on the use of the 21-gene assay.

Due to key evidence gaps to inform the model, US-specific prospective or retrospective observational studies to examine the change in chemotherapy decisions resulting from the use of the 21-gene assay are needed to improve the specificity of these estimates by nodal burden, clinical-pathological risk, and menopausal status, particularly given the differential chemotherapy benefit across subgroups as demonstrated in the TAILORx exploratory analyses and the RxPONDER study. Further studies using real-world evidence to examine the distribution and cost of adjuvant chemotherapy could help further reduce uncertainty for cost estimates informing the model.

Figure 1 In the decision-tree component of the model, RS result subgroups were defined using cut-offs used in the TAILORx study for N0 (0-10, 11-25, 26-100) and RxPONDER for N1 (0-13, 14-25, 26-100). In the 21-gene assay alternative of the model, chemotherapy assignment was dependent on the subgroup. In the clinical-pathological risk alternative of the model, it differed according to patient age, clinical risk, and menopausal status. Once patients have been assigned their RS result and assigned adjuvant treatment, they proceed to the respective part of the Markov model.

Figure 2 Markov model structure. The model included five health states; the arrows depict patient movement between health states in each model cycle. Patients can move to death from any health state. Patients enter the Markov portion in the "Recurrence-free" health state, and the probability of transition to distant recurrence, AML, and CHF is conditional on the assigned adjuvant treatment, clinical risk, and RS category (if known).
Figure 3 Tornado diagram reporting the results of one-way sensitivity analyses for the combined N0 population. The endpoint of interest for the one-way sensitivity analyses was net monetary benefit (NMB), which is the product of the threshold willingness-to-pay per QALY in the US ($100,000) and incremental QALYs gained, less incremental cost. NMB is a more appropriate endpoint to measure uncertainty in the presence of negative ICERs, which are difficult to interpret as they can represent both a dominant (higher incremental QALYs and lower incremental cost) and a dominated (lower incremental QALYs and higher incremental cost) result.

Figure 4 Tornado diagram reporting the results of one-way sensitivity analyses for the combined N1 population.

Table 1 Chemotherapy-Related Adverse Events

Table 2 Incremental Cost-Effectiveness of the 21-Gene Assay Compared to Clinical-Pathological Risk Alone, N0 Population

Table 3 Incremental Cost-Effectiveness of the 21-Gene Assay Compared to Clinical-Pathological Risk Alone, Combined N1 Population

Table 4 Breakdown of Cost for the 21-Gene Assay Compared to Clinical-Pathological Risk Alone, N0 Population. Note: *Included in societal cost perspective analysis only.
2024-06-06T15:14:55.084Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "d1e2028bd94b1a8ee6d0ba525b313eae9b2d1ca3", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=99655", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bb783840a4c8d332e5ccc42eb8dafbedf7cfb160", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
263887045
pes2o/s2orc
v3-fos-license
Joint cortical registration of geometry and function using semi-supervised learning

Brain surface-based image registration, an important component of brain image analysis, establishes spatial correspondence between cortical surfaces. Existing iterative and learning-based approaches focus on accurate registration of folding patterns of the cerebral cortex, and assume that geometry predicts function and thus functional areas will also be well aligned. However, structural/functional variability of anatomically corresponding areas across subjects has been widely reported. In this work, we introduce a learning-based cortical registration framework, JOSA, which jointly aligns folding patterns and functional maps while simultaneously learning an optimal atlas. We demonstrate that JOSA can substantially improve registration performance in both anatomical and functional domains over existing methods. By employing a semi-supervised training strategy, the proposed framework obviates the need for functional data during inference, enabling its use in broad neuroscientific domains where functional data may not be observed. The source code of JOSA will be released to the public at https://voxelmorph.net.

Introduction Image registration is fundamental in medical image analysis. Deformable image registration establishes spatial correspondence between a pair of images through a dense spatial transformation that maximizes a similarity measure. Approaches for both volume-based and surface-based methods have been thoroughly studied (Maintz and Viergever, 1998; Oliveira and Tavares, 2014). Surface-based cortical registration extracts geometric features representing brain anatomical structure, and solves the registration problem on the surface.

Cortical registration strategies achieve high accuracy in aligning complex folding patterns of the human cerebral cortex (Davatzikos and Bryan, 1996; Fischl et al., 1999b), and often improve the statistical power of group functional analysis of the brain (van Atteveldt et al., 2004; Frost and Goebel, 2012). These methods are commonly driven by geometric features that describe cortical folding patterns, such as sulcal depth and curvature (Fischl et al., 1999b; Yeo et al., 2010; Conroy et al., 2013), that are often assumed to predict function. Functional regions are thus commonly aligned via anatomical registration. However, functional variability of anatomically corresponding areas within subjects has also been widely reported (Steinmetz and Seitz, 1991; Fischl et al., 2008; Frost and Goebel, 2012). Regions with different functional specializations may not be accurately aligned even when a perfect anatomical registration is achieved.
We propose a diffeomorphic cortical registration framework that jointly describes the relationship between geometry and function. We build on recent unsupervised spherical registration strategies (Balakrishnan et al., 2019; Cheng et al., 2020a) and model a joint deformation field shared by geometry and function to capture the relatively large differences between subjects. We introduce deformation fields that describe relatively small variations between geometry and function within each subject. To avoid the biases of existing anatomical templates, we also jointly estimate a population-specific atlas during training (Dalca et al., 2019). We demonstrate this method via a semi-supervised training strategy using task functional magnetic resonance imaging (fMRI) data. In contrast to the term "unsupervised" used in the literature (Balakrishnan et al., 2019; Cheng et al., 2020a), here we borrow the term "semi-supervised" to describe our strategy to highlight that auxiliary maps, such as functional data, can be included in training but are not required during inference. We find that this strategy yields better registration accuracy in aligning both folding patterns and functional maps in comparison to existing approaches. In summary:

1. we propose a learning-based registration framework that jointly models the relationship between geometry and function, and estimates a multi-modal population-specific atlas.

2. we develop a semi-supervised training strategy that uses task fMRI data to improve functional registration without the need for task fMRI data during inference. This semi-supervised training framework can also be extended to any auxiliary data that could be helpful to guide spherical registration but is difficult to obtain during inference, such as parcellations, architectonic identity, transcriptomic information, or molecular profiles.

3. we demonstrate experimentally that the proposed framework yields improved registration performance in both anatomical and functional domains.
Model-based spherical registration Deformable registration has been extensively studied (Fischl et al., 1999b; Yeo et al., 2010; Sabuncu et al., 2010; Guntupalli et al., 2016; Robinson et al., 2014; Vercauteren et al., 2009; Nenning et al., 2017; Avants et al., 2008; Beg et al., 2005). Typical strategies often employ an iterative approach that seeks an optimal deformation field to warp a moving image to a fixed image. Methods usually involve optimization of a similarity measure between two feature images, such as the mean squared error (MSE) or normalized cross correlation (NCC), while regularizing the deformation field to have some desired property, such as smoothness and/or diffeomorphism. Widely used techniques for cortical surface registration map the surface onto a unit sphere and establish correspondence between feature maps in the spherical space (Fischl et al., 1999a). Conventional approaches, such as FreeSurfer (Fischl et al., 1999b), register an individual subject to a probabilistic population atlas by minimizing the convexity MSE weighted by the inverse variance of the atlas convexity, in a maximum a posteriori formulation. These anatomical registration methods have been adapted to functional registration by minimizing MSE on functional connectivity computed from fMRI data (Sabuncu et al., 2010). Spatial correspondence can also be maximized in a non-diffeomorphic manner by finding local orthogonal transforms that linearly combine features around each local neighborhood (Guntupalli et al., 2016, 2018). Recent discrete optimization approaches iteratively align local features using spherical meshes from low resolution to high resolution (Robinson et al., 2014, 2018).

Diffeomorphic registration enables invertibility and preserves anatomical topology using an exponentiated Lie algebra, most often assuming a stationary velocity field (SVF) (Ashburner, 2007; Vercauteren et al., 2009). These strategies were extended to the sphere by regularizing the deformation using spherical thin plate spline interpolation (Yeo et al., 2010). Several methods align functional regions, for example using Laplacian eigen embeddings computed from fMRI data (Nenning et al., 2017). These methods are successful but solve an optimization problem for each image pair, resulting in a high computational cost.

Unsupervised registration methods employ a classical loss evaluating image similarity and deformation regularity, thus forming an end-to-end training pipeline (de Vos et al., 2019; Balakrishnan et al., 2019). Semi-supervised methods employ additional information, like segmentation maps, to guide registration without requiring them during inference (Balakrishnan et al., 2019). Recent methods extend this strategy to the spherical domain by parameterizing the brain surfaces in a 2D grid that accounts for distortions (Cheng et al., 2020a) or directly on the sphere using spherical kernels (Zhao et al., 2021).

Learning-based methods improved registration run time substantially during inference, while achieving superior or similar accuracy relative to iterative methods. However, these methods do not account for variations between geometry and function within a subject.

Method We propose Joint Spherical registration and Atlas building (JOSA), a method for registration with simultaneous atlas construction, that models the anatomical and functional differences not only between subjects but also within each subject.
Generative model Fig. 1 shows the graphical representation for the proposed generative model. Let A be an unknown population atlas with all geometric and functional cortical features of interest. We propose a generative model that describes the formation of the subject geometric features I_g and functional features I_f by first warping the atlas A by a subject deformation field ϕ_j. This characterizes the differences between subjects, and results in a joint multi-feature image I_j. The geometric feature image I_g is formed given an additional field ϕ_g that deforms the geometric features in I_j, and similarly for I_f and ϕ_f.

Deformation prior. Let ϕ_j^i, ϕ_g^i, ϕ_f^i be the joint, geometric, and functional deformation fields for each subject i, respectively. All variables in the model are subject-specific, except for the global atlas A, and we omit i for our derivation. We impose the deformation priors

p(ϕ_m) ∝ exp(−λ_m ‖∇u_m‖²) exp(−γ_m ‖ū_m‖²), m ∈ {j, g, f},

where u_j is the spatial displacement for ϕ_j = Id + u_j, ∇u_j is its spatial gradient, and ū_j = (1/n) Σ_i u_j^i, and likewise for u_g and u_f. The gradient term encourages smooth deformations, while the mean term encourages an unbiased atlas A by encouraging a small average deformation over the entire dataset (Dalca et al., 2019).

Data likelihood. We treat the latent joint image I_j as a noisy warped atlas,

p(I_j | ϕ_j; A) = N(I_j; A ∘ ϕ_j, σ² I),

where N(·; µ, Σ) is the multivariate Gaussian distribution with mean µ and covariance Σ, ∘ represents spatial transformation, σ represents additive noise, and I is the identity matrix. The geometric feature image I_g is then a noisy observation of a further-moved joint image I_j:

p(I_g | I_j, ϕ_g) = N(I_g; I_j ∘ ϕ_g, σ_g² I).

Therefore, the complete geometric image likelihood is then

p(I_g | ϕ_j, ϕ_g; A) = ∫ p(I_g | I_j, ϕ_g) p(I_j | ϕ_j; A) dI_j,

with the full derivation provided in the Appendix. We use a similar model for the functional image I_f.

Learning Let Φ = {ϕ_j, ϕ_g, ϕ_f} and I = {I_g, I_f}. We estimate Φ by minimizing the negative log likelihood

L(Φ; I, A) = −log p(I_g | ϕ_j, ϕ_g; A) − log p(I_f | ϕ_j, ϕ_f; A) − log p(ϕ_j, ϕ_g, ϕ_f). (5)

Neural network approach and semi-supervision with task fMRI data We use a neural network to approximate the function h_{θ,A}(I) = Φ, where θ are network parameters. Fig. 2 shows the proposed network architecture. To work with surface-based data, the cortical surface of each subject is inflated into a sphere and then rigidly registered to an average space using FreeSurfer (Fischl, 2012). Geometric and functional features are parameterized onto a 2D grid using a standard conversion from Cartesian coordinates to spherical coordinates, resulting in a 2D image for each input (Cheng et al., 2020a).

The network takes such a parameterized geometric image as input and outputs three velocity fields, each followed by an integration layer generating the corresponding deformation field. The joint deformation ϕ_j, which models the relatively large inter-subject variance, is shared among and composed with the individual deformations ϕ_g and ϕ_f. We note that the separation of ϕ_g from ϕ_f enables us to explicitly model the structural-functional differences. This is of critical importance from a neuroscientific perspective because it has been shown that some brain structures predict their function well, and others do not (Fischl et al., 2008). The losses L_geom and L_func shown in Fig. 2 represent the geometric part and the functional part of the data fidelity terms in Eq. (5). They are evaluated in the atlas and subject space, which also helps avoid atlas drift (Aganj et al., 2017) during atlas construction. L_reg represents the regularization and centrality terms, which encourage smooth deformations and an unbiased estimation of the atlas.

In this study, we learn network parameters and use task fMRI data in a semi-supervised manner. As shown in the green block in Fig. 2, the task fMRI data and the corresponding functional atlas are not input into the neural network. Rather, they are used only for evaluating the functional terms in the loss function (5). This obviates the need for functional data during inference, as the deformation fields can be inferred using only geometric features. The proposed framework is flexible in the sense that augmentation of data modality can be easily integrated into the framework for simultaneous multi-modality registration.
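A minimal sketch of the Eq. (5) objective is given below, assuming mean-squared-error data terms from the Gaussian likelihoods, finite-difference smoothness penalties, and a batch mean as a surrogate for the population-average displacement; the variable names, the γ-style centrality weighting folded into each λ, and the loss weights are our assumptions rather than the released JOSA code.

```python
import tensorflow as tf

def josa_loss(atlas_g, atlas_f, warped_g, warped_f, u_j, u_g, u_f,
              lam_j=0.1, lam_g=0.2, lam_f=0.2, w_geom=0.3, w_func=0.7):
    """Sketch of the Eq. (5) objective: Gaussian data terms for geometry and
    function plus smoothness and centrality regularization. `warped_*` are
    the images moved by the composed joint and individual fields; `u_*` are
    displacement fields of shape (batch, H, W, 2)."""
    l_geom = tf.reduce_mean(tf.square(warped_g - atlas_g))
    l_func = tf.reduce_mean(tf.square(warped_f - atlas_f))

    def smoothness(u):
        # finite-difference spatial gradients of the displacement field
        dy = u[:, 1:, :, :] - u[:, :-1, :, :]
        dx = u[:, :, 1:, :] - u[:, :, :-1, :]
        return tf.reduce_mean(tf.square(dy)) + tf.reduce_mean(tf.square(dx))

    def centrality(u):
        # penalize the batch-mean displacement (surrogate for the
        # population mean), encouraging an unbiased atlas
        return tf.reduce_mean(tf.square(tf.reduce_mean(u, axis=0)))

    l_reg = (lam_j * (smoothness(u_j) + centrality(u_j)) +
             lam_g * (smoothness(u_g) + centrality(u_g)) +
             lam_f * (smoothness(u_f) + centrality(u_f)))
    return w_geom * l_geom + w_func * l_func + l_reg
```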
In this study, we learn network parameters and use task fMRI data in a semi-supervised manner.As shown in the green block in Fig. 2, the task fMRI data and the corresponding functional atlas are not input into the neural network.Rather they are used only for evaluating the functional terms in the loss function ( 5).This obviates the need for functional data during inference, as the deformation fields can be inferred only using geometric features.The proposed framework is flexible in the sense that augmentation of data modality can be easily integrated into the framework for simultaneous multi-modality registration. Implementation We implemented a Unet-like network based on the core architecture in VoxelMorph (https://voxelmorph.net) (Balakrishnan et al., 2019;Dalca et al., 2019).We used a 5layer encoder with [128,256,384,512,640] filters and a symmetric decoder followed by 2 more convolutional layers with [64,32] filters.Each layer involves convolution, maxpooling/up-sampling, and LeakyReLU activation.The spherical parameterization leads to denser sampling grids for regions at higher latitudes.Thus we performed prior and distortion corrections identical to that described in (Cheng et al., 2020a).In short, weights proportionally to sin(θ), where θ is the elevation, were used to correct the distortion.(Cheng et al., 2020a) also found that varying the locations of the poles in the projection had little impact on the resulting registration.The parameterized images were standardized identically but separately for structural and functional features, where the median was subtracted for each feature image followed by a division of standard deviation. During training, we randomly sampled the training data into mini-batches with a batch size of 8.For each batch, we augmented the data by adding Gaussian random deformations with σ = 4 with proper distortion correction at each spatial location.We also augmented the data by adding Gaussian noise with σ = 1 for geometric features and σ = 6 for functional features.We used the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .The learning rate was scheduled to decrease linearly to 10 −4 within the first 500 epochs and then reduced by a factor of 0.9 if the validation loss does not decrease after every 100 epochs.The relative weights between functional loss and geometric loss were set to 0.7:0.3empirically.We set the regularization hyperparameter λ j in Eq. ( 5) for the joint (large) deformation to be 0.1 and λ g and λ f for the individual (small) deformations to be 0.2.The atlases, as part of the network parameters, were initialized using Gaussian random noise and automatically learned during training.We used TensorFlow (Abadi et al., 2016) with Keras front-end (Chollet, 2018) and the Neurite package (Dalca et al., 2018b), and all experiments were conducted in the Dell Workstation with dual Intel Xeon Silver 6226R CPUs and an Nvidia RTX6000 GPU.The source code of JOSA will be released to the public at https://voxelmorph.net. 
Experiments Language task fMRI data. We used a subset consisting of 150 subjects from a large-scale language mapping study (Lipkin et al., 2022). A reading task was used to localize the language network, involving contrasting sentences and lists of nonword strings in a standard blocked design with a counterbalanced condition order across runs. Each stimulus consisted of 12 words/nonwords, and stimuli were presented one word/nonword at a time at the rate of 450 ms per word/nonword. Each stimulus was preceded by a 100 ms blank screen and followed by a 400 ms screen showing a picture of a finger pressing a button, and a blank screen for another 100 ms, for a total trial duration of 6 s. Experimental blocks lasted 18 s, and fixation blocks lasted 14 s. Each run (consisting of 5 fixation blocks and 16 experimental blocks) lasted 358 s. Subjects completed 2 runs. Subjects were instructed to read attentively and press a button whenever they saw the finger-pressing picture on the screen. Structural and functional data were collected on a 3T Siemens Trio scanner. T1-weighted images were collected in 176 sagittal slices with 1 mm isotropic resolution. Functional data (BOLD) were acquired using an EPI sequence with 4 mm thick near-axial slices, 2.1 mm × 2.1 mm in-plane resolution, TR = 2,000 ms, and TE = 30 ms. We preprocessed the data using FreeSurfer v6.0.0 as described in (Lipkin et al., 2022). The subjects' surfaces were reconstructed from the T1 images (default recon-all parameters) and data were analyzed on the subjects' native ("self") surface. Data were not spatially smoothed. A "sentence vs. nonword" contrast t-map was generated for each subject using first-level GLM analysis based on the blocked design. We randomly split the data into a training set with 110 subjects and a validation set with the remaining 40 subjects.

Baselines. We use FreeSurfer (Fischl et al., 1999b) and SphereMorph (Cheng et al., 2020a) as surface registration baselines. For FreeSurfer registration, we ran mris_register for each validation subject to register them to the FreeSurfer average space. For SphereMorph, we trained the network to predict a single deformation field, and used the average feature maps as the fixed atlas. At test time, we used the deformation field generated based on the subject's geometric features to warp each subject's functional data to the atlas space.

Evaluation. Qualitatively, we computed the group mean images of both the geometric and the functional data in the validation set after registration. We then visualized them by superimposing the functional group mean map with the curvature group mean map using Freeview (Fischl, 2012). Quantitatively, we measured the registration accuracy as the correlation between the registered individual data and the group mean (Cheng et al., 2020a). Specifically, let c_g^k = corr(I_g^k, (1/N) Σ_k I_g^k) be the Pearson correlation of the resulting image I_g^k to the group mean image for subject k and geometric feature g. We then assess the pair-wise correlation improvement c_{g,JOSA}^k − c_{g,rigid}^k for our proposed method JOSA, and similarly for FreeSurfer and SphereMorph (i.e., the correlation difference for the same subject after and before registration).
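A sketch of this evaluation metric, with hypothetical random arrays standing in for feature maps resampled onto a common grid:

```python
import numpy as np

def corr_to_group_mean(maps):
    """maps: (n_subjects, n_locations) array of one registered feature.
    Returns each subject's Pearson correlation to the group mean map."""
    group_mean = maps.mean(axis=0)
    return np.array([np.corrcoef(m, group_mean)[0, 1] for m in maps])

# Hypothetical stand-ins for maps after rigid alignment and after JOSA
rng = np.random.default_rng(0)
maps_rigid = rng.normal(size=(40, 1000))
maps_josa = maps_rigid + rng.normal(scale=0.5, size=(40, 1000))
improvement = corr_to_group_mean(maps_josa) - corr_to_group_mean(maps_rigid)
print(improvement.mean())
```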
Results Qualitatively, Fig. 3(a) shows the group mean maps after registration. A better alignment leads to a cleaner group mean image with higher peaks and sharper transitions from task-active to non-active regions. FreeSurfer and SphereMorph significantly improved the alignment of folding patterns but yield only marginal improvement in functional alignment. In contrast, JOSA achieved a substantially better alignment in both folding patterns and function. In particular, the predominant language region in the superior temporal gyrus shows a substantially stronger response and clearer functional boundaries, and we find additional active language-responsive regions around inferior frontal regions near Broca's area (arrows in Fig. 3(a)), indicating an improved structural and functional alignment.

Quantitatively, Fig. 3(b) and 3(c) show the pair-wise registration improvement in correlation for each method. All three methods show substantial improvement in aligning geometry, while JOSA yielded the highest correlation increase over the rigid registration by a substantial and statistically significant margin (p = 1.85 × 10^-8 using a one-tailed Wilcoxon signed rank test). Fig. 3(c) confirmed the qualitative observation that FreeSurfer and SphereMorph marginally improved functional registration, whereas JOSA achieved a substantial and statistically significant improvement due to its separate modeling of geometry and function (p = 2.33 × 10^-8). In addition, to illustrate the diffeomorphic property of deformation fields in JOSA, we computed the percentage of negative Jacobians for the testing subjects. On average, only 0.2% of the spatial locations exhibit negative Jacobians.

Assessing computational atlases is ill-defined and often depends on their downstream utility. Fig. 4 visually compares an unlearned atlas based on FreeSurfer registration and the JOSA-learned atlas. We find that the JOSA atlas provides more anatomical definition, supporting registration with higher resolution and finer details, which may also contribute to the improved performance of the proposed method. Moreover, we conducted an ablation experiment to explore the effect of atlases. We trained two networks using an identical structure to JOSA but with the two different atlases shown in Fig. 4. Results show that the substantial improvement for geometry is mainly attributable to a better atlas, whereas the improvement in registration of function is primarily due to the separate modeling of deformation between geometry and function, as illustrated in Fig. 5 in the appendix.

Discussion We developed JOSA, a joint geometric and functional registration framework that also estimates a population-specific atlas. JOSA yields superior performance in registering folding patterns and task-active regions in comparison to other traditional or learning-based methods. Using a semi-supervised training strategy, JOSA lifts the burden of acquiring functional data during inference, which promises to enable easier translation to scientific studies or clinical applications.
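The negative-Jacobian check reported above can be approximated on the 2D parameterization as follows; this is our sketch, not the authors' implementation, and it ignores the spherical metric.

```python
import numpy as np

def negative_jacobian_fraction(phi):
    """phi: (H, W, 2) sampled deformation field on the 2D grid. Returns
    the fraction of locations whose Jacobian determinant is negative
    (i.e., locally folding, non-diffeomorphic)."""
    d_row = np.gradient(phi, axis=0)  # derivative along rows
    d_col = np.gradient(phi, axis=1)  # derivative along columns
    det = d_row[..., 0] * d_col[..., 1] - d_row[..., 1] * d_col[..., 0]
    return float((det < 0).mean())

# Identity field has Jacobian determinant 1 everywhere -> fraction 0
rows, cols = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
identity = np.stack([rows, cols], axis=-1).astype(float)
print(negative_jacobian_fraction(identity))  # 0.0
```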
The current approach is limited by the size of the dataset as well as the single-task contrast used in the study. In particular, the hyperparameters were selected based on the validation result, which may be sub-optimal and potentially impact the model's generalizability. We plan to expand our framework to a broad range of functional data with a larger number of subjects to better explore the relationship between geometry and function. We also plan to invest effort in more thoroughly characterizing the learned atlas and analyzing its contribution to spherical registration.

Figure 1: Graphical representation of the generative model. Circles are random variables. Rounded squares indicate parameters. Shaded quantities are observations. The big plate represents replication. A represents the global atlas, I the input image, and ϕ the deformation field. The subscripts j, g, and f stand for joint, geometry, and function, respectively.

Figure 2: Network architecture and preprocessing pipeline. The network takes the geometric features from the subject and outputs one joint and two separate deformation fields for registration of folding patterns and functions separately. Task fMRI data were used for evaluating the functional loss only, in a semi-supervised manner.

Figure 3: Comparison of registration results. (a) Average language activation map across test subjects superimposed on the average curvature map. The curvature map is shown in dark gray and the thresholded language task activation map is shown using a heat map; (b) box-plot of pair-wise correlation of individual curvature maps to the group mean; (c) counterpart of (b) but for function.

Figure 4: Atlas comparison. Curvature is shown on an inflated surface for each atlas.
2023-03-06T09:59:52.686Z
2023-03-02T00:00:00.000
{ "year": 2023, "sha1": "28d11a009ecaef45cc320d16c78df3ee9c1c6c34", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "ArXiv", "pdf_hash": "8bd7dbc46b5756252d066128d0764392bd07ed0f", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science", "Biology" ] }
13994439
pes2o/s2orc
v3-fos-license
Fibromyalgia Syndrome: A Case Report on Controlled Remission of Symptoms by a Dietary Strategy

A 34-year-old woman suffered from significant chronic pain, depression, non-restorative sleep, chronic fatigue, severe morning stiffness, leg cramps, irritable bowel syndrome, hypersensitivity to cold, concentration difficulties, and forgetfulness. Blood tests were negative for rheumatic disorders. The patient was diagnosed with Fibromyalgia syndrome (FMS). Due to the lack of effectiveness of pharmacological therapies in FMS, she approached a novel metabolic proposal for the symptomatic remission. Its core idea is supporting serotonin synthesis by allowing a proper absorption of tryptophan assumed with food, while avoiding, or at least minimizing, the presence of interfering non-absorbed molecules, such as fructose and sorbitol. Such a strategy resulted in a rapid improvement of symptoms after only a few days on diet, up to the remission of most symptoms in 2 months. Depression, widespread chronic pain, chronic fatigue, non-restorative sleep, morning stiffness, and the majority of the comorbidities remitted. Energy and vitality were recovered by the patient as prior to the onset of the disease, reverting the occupational and social disabilities. The patient episodically challenged herself by breaking the dietary protocol, leading to its negative test and to the evaluation of its benefit. These breaks correlated with the recurrence of the symptoms, supporting the correctness of the biochemical hypothesis underlying the diet design toward remission of symptoms, but not as a final cure. We propose this as a low-risk and accessible therapeutic protocol for the symptomatic remission in FMS, with virtually no costs other than those related to vitamin and mineral salt supplements in case of deficiencies. A pilot study is required to further ground this metabolic approach, and to finally evaluate its inclusion in the guidelines for clinical management of FMS.

INTRODUCTION Fibromyalgia syndrome (FMS) is a challenging, complex, heterogeneous, chronic, and often disabling disorder (1, 2). Its pathophysiology is still poorly understood (1, 3, 4). Chronic musculoskeletal widespread pain, fatigue, non-restorative sleep, mood disturbances, and cognitive impairments characterize this condition (1, 2). Furthermore, a constellation of comorbidities, only apparently unconnected, afflicts the patients (2-8), and these have a clear common denominator in serotonin (5-HT). The symptoms can range from mild to severe, determining in the worst cases an invalidating condition that dominates daily life (9). They may lead to occupational and social disability with associated direct and indirect economic costs (10). Even if pain is the core symptom in FMS, non-painful symptoms may also impact the quality of life of patients. Recognizing those symptoms may be difficult in the absence of other apparent organic diseases. This makes FMS a real challenge for physicians and healthcare professionals. We report the first case of controlled remission of symptoms in FMS, following a novel metabolic approach. The therapeutic protocol is a strict diet, focused on the withdrawal of food components that may interfere with the absorption of L-tryptophan (Trp), the 5-HT precursor (11).

Patient Presentation The patient is a 34-year-old woman, body mass index 18, Caucasian, with a high level of education.
Old Past History The patient's past history, ex post relevant for FMS differential diagnosis, includes irritable bowel syndrome-constipation (IBS-c), bloating, and dysmenorrhea experienced since adolescence, Raynaud's phenomenon, trapezius contractures, and leg and foot cramps, especially at night or on waking up in the morning.

Past History The onset of lower back pain, restless legs, and morning stiffness occurred a few months after a surgery. The symptoms were initially described as mild in severity, particularly concerning pain, and erratic. One year later, lower back pain and hip pain forced the patient to bed rest. Non-steroidal anti-inflammatory drugs (i.e., ibuprofen) and muscle relaxants (i.e., thiocolchicoside) were prescribed by the primary care clinician, and led to mild effects. Magnetic resonance imaging and X-ray investigations revealed a lumbar disk hernia and no lesions or abnormalities at the hips. One year later, during the autumn temperature decrease, a further episode of lower back pain and stiffness occurred, forcing the patient to bed rest for more than 2 weeks. The previous treatment had no effectiveness, and a corticosteroid drug (i.e., prednisone) led to no appreciable relief. Pain was in part relieved by gabapentin, but only with slow dynamics and with the collateral effects of suicidal thoughts and mental confusion. Further magnetic resonance and X-ray investigations confirmed the previous diagnosis, with no new data to explain the relapse worsening. Similar symptoms of lower back pain during the following 2 years were attributed to the same cause. These episodes recurred three to four times a year, affecting life quality and mobility, and forcing the patient to bed. Intriguingly, the worst episodes appeared to correlate with the decreasing temperatures of autumn.

Recent Past History The symptoms increased slightly, but progressively, with an unpredictable and fluctuating nature. Morning stiffness required more than 40 min to get up on awakening. Fatigue unrelieved by rest, low back pain, migrant aches in the joints, musculoskeletal widespread pain, short-term memory loss, concentration difficulties, and forgetfulness were the major symptoms. The evaluation of her thyroid function did not reveal any abnormality [i.e., thyrotropin (TSH) was within the normal range of 0.270-4.200 μU/ml, with a value of 2.54]. A modest improvement was observed during summer and hot weather conditions, but the unpredictable character of the symptoms made the patient feel insecure and anxious. These had a considerable impact on everyday life, affecting social interaction and professional performance. She appeared healthy when compared with others: being doubted, because of the invisible nature of her pain, had an additional negative impact on the patient's well-being.

Differential Diagnosis A deterioration of the patient's conditions began with a severe worsening of morning stiffness and lower back pain, leading to disabling conditions and forcing the patient to bed rest with severe aches. This was described as a "torture-like experience," without any respite, for a period of 48 h. The concomitant onset of such a pain at both hips and at the right shoulder led to investigations for rheumatic diseases and to the hypothesis of FMS. At a visit to a second specialist in rheumatology, tender points were assessed again, and the differential diagnosis of FMS was made.
This was based on the presence of tender point sensitivity (14/18), widespread chronic pain for longer than 3 months, morning stiffness, non-restorative sleep, depression, anxiety, leg and foot cramps, chest pain, tachycardia, hypersensitivity to cold, cognitive impairment such as forgetfulness and low concentration, irritable bowel syndrome-constipation (IBS-c), prickling sensations at fingers and toes, bloating, and hyperhidrosis.

The ineffectiveness of pharmacological therapies in FMS came to the patient's knowledge (1). The patient refused the proposed muscle relaxant drug (i.e., tizanidine) on the basis of its unproven effectiveness (1), and she also refused the proposed selective serotonin-norepinephrine re-uptake inhibitor (SNRI) (i.e., duloxetine) (12) on the basis of awareness of collateral effects (13) and the development of pharmacological addiction. The patient generally feared the collateral effects of the drug treatments, and she was rather interested in the novel metabolic approach for the symptomatic remission in FMS (11).

Guidelines (11) The therapeutic protocol is a strict diet. It was devised to facilitate Trp absorption, and thus guarantee its bioavailability as a substrate for 5-HT synthesis. In order to sustain 5-HT synthesis, it is mandatory to remove molecules that could negatively affect the fate of Trp in the gastrointestinal tract. The core of this approach is the exclusion of some carbohydrates from the diet and the proper intake of Trp with food (11). Because fructose is a highly reactive sugar (14), limiting the intake of fructose as much as possible is the essential point, including fructose chains, such as fructans and inulins, and some other molecules that do not have specific transport systems (e.g., sorbitol). Glutamate and aspartame should also be excluded (11).

Diet The patient's diet includes eggs, meat, fish, clams, potatoes, carrots, celery, spinach, beets, chards, dark chocolate (at least 70+% cacao), rice, millet, carob powder, walnuts, extra virgin oil, grape seed oil, thyme, sage, rosemary, coffee, green tea, and a small amount of almonds. Almonds, despite containing fructose, still belong to the patient's diet, as they are well tolerated in small amounts; they are suggested to be consumed together with a glucose source, typically rice or potatoes, to activate the GLUT2 transporter, as remarked in Ref. (11). Any food, beverage, or herb not in the previous list and not in accordance with the treatment guidelines is excluded from the diet protocol. In particular, processed food containing artificial sweeteners, high fructose corn syrup, sorbitol, glutamate, and aspartame must be excluded: among others, soft drinks, fruit juices, and the majority of confectionery (11). Food containing free fructose, such as honey and fruits, must be removed from the patient's diet. Most legumes, wheat and most cereals, and many vegetables that contain fructans and inulins (15) must also be removed (11). Attention must also be paid to the excipients in pharmacological preparations, pills, syrups, and solutions (16). Compared with the patient's previous diet, the one proposed here does not affect the total daily energy intake (2,200-2,400 kcal/day), but it changes the nutritional profile, with a reduction in carbohydrates and fibers and an increase in protein and fat intake. The patient's diet is thus composed of 31-36% carbohydrates, 30-32% fats, 25-27% proteins, and 9-10% fibers. The previous diet was mainly a Mediterranean diet. It was rich in vegetables, fresh fruits, dried fruit, cereals, and legumes.
It contained a moderate amount of fish, meat, dairy products, eggs, nuts, and sweets. Its proportion of nutrients was: 55-56% carbohydrates, 30-32% fats, 17-18% proteins, and 16-18% fibers.

Therapeutic Approach
In order to assess her dietary intake, the patient was asked to keep a food diary. This method requires the subject to list the consumed food and the state of health, reporting the presence of symptoms: widespread pain, fatigue, morning stiffness, bowel function, headaches, sleep quality, cramps, prickling sensations at the fingers and toes, mood changes, anxiety, and depressive mood, among others. This method allows evaluation of compliance with the diet guidelines and of the impact of diet modifications on symptoms. It makes the patient an active subject in the fight against the disease. This approach contributes greatly to the patient's motivation and compliance with the protocol, as it makes the patient conscious of her own power over the control of symptoms.

Patient Clinical Response
The growing severity of symptoms highly motivated the patient to strictly follow the diet guidelines. For a complete picture, the patient had already been on a lactose-free diet for 3 years and on a pork meat-free diet for 5 years. Dietary modifications resulted in a rapid improvement of the patient's condition after only a few days, up to the full resolution of the majority of symptoms within a few weeks. Symptoms of depression disappeared. Fatigue unrelieved by rest disappeared, and she regained restorative sleep. Chronic widespread musculoskeletal pain and morning stiffness improved markedly, up to being no longer present. She recovered her energy and vitality. She became completely independent in all the activities of her daily life, as before the onset of the disease, resolving the occupational and social disabilities. The patient then broke the dietary protocol: non-admitted foods were arbitrarily, deliberately, and voluntarily consumed (for instance, among others: eating a pear, a fig, an onion, or asparagus). This serves as a negative control. It is significant for three different reasons: to exclude a major placebo component in the remission of symptoms, to evaluate the short-term effectiveness of the treatment, and to establish whether the protocol is a final cure or a remission protocol. The recurrence of symptoms correlated with diet faults. The treatment therefore leads to a remission, but it is not a final cure. Two months after the beginning of the diet, the patient was vastly improved in every respect. She regained her positive mental outlook. She returned to full employment. She recovered her energy and vitality as she had not in years.

Subsequent Course
Twelve months after the differential diagnosis and 10 months after the beginning of the diet modification, the patient is still on the diet. Marked non-compliance occurred a few times, the patient being well aware of the consequent recurrence of symptoms: when isolated, small faults trigger mild symptoms; nevertheless, repeated and continuing faults have the potential to lead back to the previous chronic condition of pain, fatigue, and mood symptoms. Moreover, being pain-free, the patient started aerobic physical exercise, which she had been unable to perform before owing to stiffness and widespread musculoskeletal pain. Some comorbidities did not completely resolve: sensitivity to cold, hypersensitivity to odors and noise, dysmenorrhea, and memory lapses are still present.
FMS Burden
Fibromyalgia syndrome is a challenging, insidious, and disabling disease that afflicts patients and their relatives as a real burden in everyday life (10). Epidemiological data clearly demonstrate the socio-economic burden associated with FMS and the urgency of effective answers (4,10,17-19). The diagnosis, often delayed, may exacerbate the patient's condition. Being doubted because of the invisibility of pain is perceived as a "double burden." Unfortunately, this is a common condition among patients (9). Despite the large number of pharmacological and non-pharmacological clinical trials and studies performed since the 1990s, an effective cure is still lacking (1). The crucial role of 5-HT in FMS is no longer a matter of debate. It has been clearly observed in experimental studies, although its pathophysiological mechanism is still not fully understood. Low levels of 5-HT and/or of its precursor Trp were variably observed in such studies, early in the 1990s (20-22) and more recently (7,23,24). The introduction of selective serotonin re-uptake inhibitors (SSRIs) and SNRIs as pharmacological therapy in FMS was the consequence in clinical practice (3,6,25-29). Besides Trp, low levels of other essential amino acids (30,31) and altered amino acid homeostasis (32) have been reported in patients with FMS as compared to the general population; however, these findings did not translate into an effective cure (1). Surprisingly, the "2016 Revisions to the 2010/2011 fibromyalgia diagnostic criteria" (33) did not contain any explicit reference to blood testing in this direction.

The Novel Remission Protocol Beyond the State of the Art
In this scenario, where the challenge for physicians and healthcare systems to face FMS is clear and still open (1), we report the first case of controlled remission in FMS following a novel metabolic approach (11). This report shows the crucial role of diet in FMS, and of food choice as a key strategy for its management. The marked improvements in the patient's clinical condition open great perspectives for facing the FMS burden by giving patients an effective strategy. Intrinsically, a withdrawal approach avoids the potential side effects associated with pharmacological therapies [i.e., SSRIs and SNRIs (13,34-36), muscle relaxants, and the interactions among them]. The effectiveness-to-cost ratio of this approach is evident. It is a low-risk and accessible therapeutic approach with virtually no costs for the treatment itself, beyond those related to possible vitamin and mineral salt supplements and to blood testing to evaluate their levels. The economic perspective could be relevant bearing in mind the significant number of patients. Dietary modification in FMS is not a new approach: different diets were attempted in the past, variably focused on the elimination of certain foods or chemical additives (37). Nevertheless, the therapeutic approaches proposed so far were often not grounded in a solid theory able to fully predict and explain the experimental outcomes.

The Possible Role of the Placebo and Nocebo Effects
As in any therapeutic approach implemented for chronic pain, a significant placebo response should be considered. The placebo effect is reported in FMS (38,39). Breaking the diet protocol with non-admitted food aims to exclude a major placebo contribution to the remission of symptoms.
Although a placebo component cannot be excluded entirely a priori, the occurrence of an ad hoc nocebo effect precisely correlated with diet faults (i.e., voluntary breaks of the protocol guidelines and accidental mistakes) is highly improbable.

Diet Management and Implementation in the Clinical Practice
It is already known that non-immunologically mediated adverse reactions to food, which resolve following dietary elimination, are then reproduced by food challenge (40). Clinical improvement was reported after dietary treatment for fructose malabsorption in irritable bowel syndrome (IBS) patients by different studies (41,42); in particular, a significant reduction of symptoms and improvements in quality of life proportionate to the amount of eliminated fructose were reported by Choi et al. (43). The human capacity for fructose absorption is widely variable (44); incomplete fructose absorption can occur with doses as low as 5 g in individuals considered healthy (45). Some authors report that patients with IBS associated with fructose malabsorption can tolerate 10-15 g of fructose per day (46). It is reasonable to suppose that a threshold exists in patients with FMS too, and that the tolerated amount of fructose and of the other non-admitted molecules could be related to the severity of the patient's condition. The threshold can be very low: the patient reports that even very low amounts of free fructose are able to trigger the symptoms. In severe conditions, a compromise may not be possible at all, and a complete fructose-free diet is the suggestion. A patient-to-patient tailored approach is the best implementation in clinical practice. This report supports the protocol as intrinsically effective for the remission of symptoms in FMS. It is a matter of fact that adherence to the protocol is correlated with symptomatic improvements, and non-adherence with the recurrence of symptoms. Because of the abundance of fructose in our food supply (as it is present not only in the form of the simple monosaccharide, but also in the form of fructose chains), a strict fructose-free and fructan-free diet is binding at first, but it may no longer be required once patients experience sufficient relief from their symptoms. In particular, in non-severe conditions, a "re-introduction phase" could be approached by introducing into the diet small amounts of non-admitted foods, one at a time, in order to determine exactly how much fructose and how much of the other non-admitted molecules can be tolerated, so as to have the least restrictive diet while keeping symptoms under control. In this way, partial compliance with the protocol guidelines may be a personal compromise to control the symptoms to a satisfactory level, while minimizing the social limitations that dietary restrictions impose. The co-ingestion of glucose could in principle be beneficial, allowing the presence of a small amount of fructose in the diet (11). As previously mentioned, it activates the GLUT2 co-transporter, reducing the amount of non-absorbed fructose (47,48). The difficulty of adhering to such a restricted diet might be its main criticism. Nevertheless, compliance with the protocol guidelines yields a high symptomatic improvement, which is in itself an incentive to stay on the diet. Diet modifications that improved health conditions in an IBS cohort of patients reveal the willingness and ability of patients to maintain dietary restrictions in order to avoid painful meal-related events (49).
Patient dietary education contributes to compliance with the diet too, so that proper training and active involvement on the way to remission are crucial for successful and durable results.

Conclusion
This report shows a remarkable clinical improvement in FMS achieved by a strict exclusion diet: remission of depression, pain, stiffness, chronic fatigue, and non-restorative sleep. This result opens new perspectives in the treatment of FMS, resolving the substantial functional limitations experienced by patients while requiring virtually no costs for the treatment itself. It could be an important point for healthcare systems to consider, given the incidence of FMS. The costs associated with the treatment are related to blood investigations and to supplements of vitamins and mineral salts in case of deficiencies. Moreover, a dietary strategy intrinsically has none of the side effects associated with pharmacological therapies. Preliminary patient training should be considered, in order to gain proper knowledge of the protocol guidelines; it is important that patients can correlate voluntary or accidental failures with the recurrence of symptoms, in order to avoid entering the "positive feedback loop." Because of its efficacy on symptoms, the absence of drug side effects, and its low cost, this approach could be an effective and accessible answer to the burden of FMS, at least to give patients a respite. A pilot study is required to ground this metabolic approach in FMS and to finally evaluate its inclusion in the guidelines for clinical management.

Ethics Statement
The subject described in this report has given written informed consent for publication of the above-mentioned data on her FMS case.

Author Contributions
SML designed the report and wrote the final draft. FI followed the patient in her clinical history. SML and FI collected the patient's clinical data and approved the final draft for publication.

Acknowledgments
The authors thank Laura Bianchi and Giuseppe Zanotti for reading the draft of the manuscript and making useful suggestions to improve it. Associate editor Dr. Kayo Masuko followed the review process from the beginning. Despite her leaving the review process after the endorsement by the reviewers, the authors wanted to acknowledge her presence in the manuscript history, as her requirements improved the final manuscript.
Magnusiomyces capitatus fungemia: The value of direct microscopy in early diagnosis
Two cases of fungemia caused by Magnusiomyces capitatus, an arthroconidial yeast-like fungus, in non-hematologic immunocompromised patients are described. Both patients died before a definite diagnosis of M. capitatus was made. The report highlights that, pending confirmation of the isolate by phenotypic and/or molecular methods, the characteristic morphologic features observed in Gram-stained smears of positive blood culture bottles can lead to an early preliminary diagnosis, thus significantly reducing the time required for initiating appropriate antifungal therapy.

Introduction
Magnusiomyces capitatus is an uncommon yeast that has undergone multiple taxonomic revisions in the last few decades [1] (De Hoog et al., 2004). It is an ascomycetous yeast-like fungus and belongs to the order Saccharomycetales in the family Dipodascaceae. In the literature, it has been variously described as Blastoschizomyces capitatus, Geotrichum capitatum, Trichosporon capitatum, and Dipodascus capitatus [1] (de Hoog et al., 2004). M. capitatus can cause invasive infections among immunosuppressed patients, especially those with hematologic disorders [2,3] (Mazzcato et al., 2015; Tanuskova et al., 2017). It can also infect non-neutropenic and immunocompetent patients [4,5] (Shah, 2017; D'Assumpcao et al., 2018). In immunocompromised patients, infections with M. capitatus are associated with an increased risk of dissemination and high rates of mortality. In addition, Magnusiomyces species, along with other arthroconidial yeasts, are intrinsically resistant to echinocandins, which are often used as first-line therapy for invasive candidiasis [6] (Kaplan et al., 2018). Here, we describe two cases of M. capitatus in non-hematologic immunocompromised patients, who died before a definitive diagnosis was made. In this case report, we explore the important role of direct microscopy in the rapid diagnosis of Magnusiomyces species, which has significant treatment implications.

Cases
Case 1. An 85-year-old woman with a long history of bronchial asthma, hypertension, ischemic heart disease, and chronic renal disease was admitted (day 0) in May 2018 because of a chest infection. Empirically, she was prescribed cefepime, linezolid, and oseltamivir (day +1). Two days following admission (day +2), she developed respiratory failure requiring intensive care with mechanical ventilation. Concomitantly, she also developed acute liver failure and severe renal impairment. Her total white blood cell count and neutrophils were raised: 24 × 10^9/L and 23 × 10^9/L, respectively. Because she remained critically ill, hydrocortisone (day +2) and caspofungin (day +3) were started. As she had acute hepatic impairment, she received only a 35 mg maintenance dose of caspofungin. On day +9, hemodialysis was initiated due to progressive renal failure, and cefepime was replaced with meropenem and colistin. On day +12, a new set of blood cultures was collected, which yielded a yeast growth (day +15) by automated blood culture system (BD BACTEC FX). Gram-stained smears from the blood culture bottles showed numerous arthroconidia fragmenting into rectangular forms (Figs. 1 and 2). Subculture on Sabouraud dextrose agar (Oxoid, Basingstoke, UK) yielded whitish, dry, wrinkled, yeast-like colonies with radiating edges. The isolated yeast was identified as Saprochaete capitata by VITEK 2 with 99% confidence.
However, before accurate identification and antifungal susceptibility testing of the yeast isolate could be completed, the patient succumbed to the infection (day +15). Antifungal susceptibility data by Etest (bioMérieux) showed resistance to caspofungin (MIC ≥32 μg/ml) and micafungin (MIC ≥32 μg/ml), and somewhat reduced susceptibility to fluconazole (MIC 3 μg/ml); however, the isolate appeared susceptible to voriconazole (0.

Case 2. A 67-year-old woman with a history of diabetes, hypertension, ischemic heart disease, left ventricular failure, peripheral vascular disease, bronchial asthma, and obstructive sleep apnea presented (day 0) with decreased oral intake and a reduced level of consciousness. On examination, she was afebrile but hypotensive. A CT scan of the head ruled out acute brain insult. Blood investigations revealed a high total white cell count of 18 × 10^9/L, increased neutrophils at 13 × 10^9/L, normal procalcitonin (0.65 ng/mL), and raised serum creatinine (159 μmol/L) indicating acute kidney injury. The patient was in septic shock, so inotropic support was given and she was shifted to the intensive care unit (day 0). A central line was inserted, and ceftriaxone and clarithromycin were started. Despite optimal supportive care, the patient died the next day (day +1). The blood cultures collected shortly after admission (day 0) grew a yeast, which was identified as Saprochaete capitata by VITEK 2 (bioMérieux) and VITEK MS (confidence value 99%). The isolate was tested by Etest (bioMérieux) to determine antifungal susceptibility. It was resistant to caspofungin (MIC ≥32 μg/ml) and fluconazole (MIC = 16 μg/ml), but susceptible to amphotericin B (MIC = 0.5 μg/ml) and voriconazole (MIC = 0.5 μg/ml). PCR amplification followed by DNA sequencing of the ITS region of rDNA confirmed the identity of the isolate as M. capitatus.

Discussion
This report is noteworthy in that it conveys three important messages: firstly, M. capitatus fungemia occurred in non-neutropenic and non-hematologic patients; secondly, the initial diagnosis was made from the characteristic morphological features of the yeast in blood cultures; and thirdly, it emphasizes the need for prompt identification and susceptibility testing, since arthroconidial yeast-like fungi are intrinsically resistant to echinocandins. M. capitatus (anamorph: Saprochaete capitata) is an emerging yeast pathogen associated with considerable mortality in immunocompromised patients [2,3,9-11]. A study conducted by Kaplan et al., which included 21 M. capitatus isolates, revealed that they were resistant to fluconazole and micafungin, but highly susceptible to voriconazole [6]. These findings are consistent with the susceptibility results of our two isolates and also with some other reports [9,10,12,14]. Mazzacato et al. [2] reviewed 104 cases of S. capitata infection reported between 1977 and 2013. The most common risk factor for M. capitatus infection was prolonged neutropenia, and the majority of the patients (82%) had hematologic malignancies. Around 75% of the cases were diagnosed by blood cultures, while in the remaining cases (25%) the organism was isolated from other sterile sites, such as CSF, peritoneal fluid, or tissue biopsies. Interestingly, 43% of the cases had more than one site involved, including brain, lung, liver, spleen, kidney, gut, bone, and/or bone marrow [2] (Mazzacato et al., 2015). The outcome depended upon the immune status and degree of neutropenia of the host. In patients with profound neutropenia, mortality may exceed 90% [15] (Bouza et al., 2014).
Blood culture is currently the main method for diagnosing patients with fungemia or candidemia. It has the advantage of isolating the etiologic agent, allowing identification at the species level and susceptibility testing. However, it has a long turnaround time: on average, it takes around 24-48 hours from a positive blood culture to identify the species. To shorten this time, the role of direct microscopy using the Gram stain has been re-examined in several publications. Harrington et al. [16] examined the use of yeast morphology in Gram-stained smears to differentiate Candida albicans from other yeasts. They found that the presence of clustered pseudohyphae had a sensitivity, specificity, positive predictive value, and negative predictive value of 85, 97, 96, and 89%, respectively. Likewise, Meretuk & Hamprecht [17] assessed the usefulness of the microscopic morphologic features of common Candida spp. in positive blood cultures for identifying the species. Features such as pseudohyphal clusters, the degree of branching, and cylindrical or oval blastospores were used to form an algorithm for species identification. The authors reported that 92% of the tested Candida isolates were correctly identified using this algorithm. It is well known that a delay in the institution of antifungal therapy, even by a few hours, increases mortality several-fold in candidemia cases, and the same may also apply to Magnusiomyces fungemia [18] (Garey et al., 2006). To the best of our knowledge, the value of direct microscopy in diagnosing Magnusiomyces bloodstream infection has not been highlighted in previous studies. The characteristic microscopic picture, characterized by arthroconidial forms, offers a distinct advantage in achieving rapid diagnosis and may help clinicians choose appropriate antifungal therapy and avoid echinocandins, to which M. capitatus and other closely related arthroconidial yeast species are intrinsically resistant (Schuemans et al., 2011; Arendrup et al., 2014) [12,19]. The other arthroconidial yeasts include Saprochaete clavata, Geotrichum candidum, and Trichosporon spp., which all have high MICs against echinocandins. Currently, no clinical breakpoints or therapeutic guidelines are available for treating M. capitatus infection. Based on antifungal susceptibility profiles and limited clinical experience, amphotericin B, with or without flucytosine, could be recommended (Arendrup et al., 2014) [19]. It is pertinent to emphasize here that, with the increasing use of echinocandins in clinical practice, the frequency of infections caused by arthroconidial yeast-like filamentous fungi is likely to increase, warranting a greater understanding of their epidemiology, virulence attributes, and management strategies. In conclusion, this report underscores the value of Gram-stained smears from positive blood cultures in the early presumptive diagnosis of M. capitatus fungemia. Prompt initiation of appropriate antifungal therapy, while avoiding echinocandin usage, is crucial to improving the therapeutic outcome of patients with fungemia caused by arthroconidial yeast-like fungi.

Conflict of interest
There are none.
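As a purely illustrative aside, the morphology-based triage described above can be thought of as a small decision rule. The sketch below encodes that idea in code; the feature names, thresholds, and decision logic are our own didactic assumptions, not a validated identification algorithm or any published criteria.

```python
# Toy sketch of a morphology-based triage rule for positive blood-culture
# Gram stains, loosely following the ideas discussed above. The features
# and the decision logic are illustrative assumptions, NOT a validated
# diagnostic algorithm.

def triage_gram_stain(arthroconidia: bool, rectangular_fragments: bool,
                      clustered_pseudohyphae: bool) -> str:
    """Return a provisional (presumptive) call to guide early empiric therapy."""
    if arthroconidia and rectangular_fragments:
        # Arthroconidial yeast-like fungi (e.g., Magnusiomyces/Saprochaete,
        # Geotrichum, Trichosporon) are intrinsically echinocandin-resistant,
        # so flagging them early steers empiric therapy away from echinocandins.
        return "arthroconidial yeast suspected - avoid echinocandins"
    if clustered_pseudohyphae:
        return "Candida albicans-like morphology"
    return "yeast, morphology non-specific - await identification"

if __name__ == "__main__":
    print(triage_gram_stain(True, True, False))
```

Any such presumptive call would, of course, still be confirmed by phenotypic and molecular identification, as in the two cases above.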
Dynamic Connectivity: Connecting to Networks and Geometry
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in Õ(m^{2/3}) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^{0.94}). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, Õ(n^{2/3}), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries, such as arbitrary 2D line segments, d-dimensional simplices, and d-dimensional balls.

But what exactly makes a graph "dynamic"? Computer networks have long provided the common motivation. The dynamic nature of such networks is captured by two basic types of updates to the graph:
• edge updates: adding or removing an edge. These correspond to setting up a new cable connection, accidental cable cuts, etc.
• vertex updates: turning a vertex on and off. Vertices (routers) can temporarily become "off" after events such as a misconfiguration, a software crash and reboot, etc.
Problems involving only vertex updates have been called dynamic subgraph problems, since queries refer to the subgraph induced by vertices which are on.

Loosely speaking, dynamic graph problems fall into two categories. For "hard" problems, such as shortest paths and directed reachability, the best known running times are at least linear in the number of vertices. These high running times obscure the difference between vertex and edge updates, and identical bounds are often stated [9,32,33] for both operations. For the remainder of the problems, sublinear running times are known for edge updates, but sublinear bounds for vertex updates seem much harder to get. For instance, even iterating through all edges incident to a vertex may take linear time in the worst case.

That vertex updates are slow is unfortunate. Referring to the computer-network metaphor, vertex updates are cheap "soft" events (misconfiguration or reboot), which occur more frequently than the costly physical events (cable cut) that cause an edge update.

Subgraph connectivity. As mentioned, most previous sublinear dynamic graph algorithms address edge updates but not the equally fundamental vertex updates. One notable exception, however, was a result of Chan [6] from STOC'02 on the basic connectivity problem for general sparse (undirected) graphs.
This algorithm can support vertex updates in time¹ O(m^{0.94}) and decide whether two query vertices are connected in time O(m^{1/3}). Though an encouraging start, the nature of this result makes it appear more like a half breakthrough. For one, the update time is only slightly sublinear. Worse yet, Chan's algorithm requires fast matrix multiplication (FMM). The O(m^{0.94}) update time follows from the theoretical FMM algorithm of Coppersmith and Winograd [8]. If Strassen's algorithm is used instead, the update time becomes O(m^{0.984}). Even if optimistically FMM could be done in quadratic time, the update time would only improve to O(m^{0.89}).

FMM has been used before in various dynamic graph algorithms (e.g., [10,26]), and the paper [6] noted specific connections to some matrix-multiplication-related problems (see Section 2). All this naturally led one to suspect, as conjectured in the paper, that FMM might be essential to our problem. Thus, the result we are about to describe may come as a bit of a surprise: vertex updates in Õ(m^{2/3}) amortized time, with Õ(m^{1/3}) query time, without any use of FMM (Theorem 1).

¹We use m and n to denote the number of edges and vertices of the graph respectively; Õ(·) ignores polylogarithmic factors, and O*(·) hides n^ε factors for an arbitrarily small constant ε > 0. Update bounds in this paper are, by default, amortized.

First of all, this is a significant quantitative improvement (to anyone who regards an m^{0.27} factor as substantial), and it represents the first convincingly sublinear running time. More importantly, it is a significant qualitative improvement, as our bound does not require FMM. Our algorithm involves a number of ideas, some of which can be traced back to earlier algorithms, but we use known edge-updatable connectivity structures to maintain a more cleverly designed intermediate graph. The end product is not straightforward at all, but still turns out to be simpler than the previous method [6] and has a compact, two-page description (we regard this as another plus, not a drawback).

Dynamic Geometry

We next turn to another important class of dynamic connectivity problems: those arising from geometry.

Geometric connectivity. Consider the following question, illustrated in Figure 1(a): maintain a set of line segments in the plane, under insertions and deletions, to answer queries of the form "given two points a and b, is there a path between a and b along the segments?" This simple-sounding problem turns out to be a challenge. On one hand, understanding any local geometry does not seem to help, because the connecting path can be long and windy. On the other hand, the graph-theoretic understanding is based on the intersection graph, which is too expensive to maintain. A newly inserted (or deleted) segment can intersect a large number of objects in the set, changing the intersection graph dramatically.

Abstracting away, we can consider a broad class of problems of the form: maintain a set of n geometric objects, and answer connectivity queries in their intersection graph. Such graphs arise, for instance, in VLSI applications in the case of orthogonal segments, or in gear transmission systems, in the case of touching disks; see Figure 1(b). A more compelling application can be found in sensor networks: if r is the radius within which two sensors can communicate, the communication network is the intersection graph of balls of radius r/2 centered at the sensors.
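To make the sensor-network example concrete, the following sketch (our own illustration, not part of the algorithms developed in this paper) builds the intersection graph of disks of radius r/2 naively and answers one connectivity query with union-find; two such disks intersect exactly when their centers lie within distance r. The dynamic algorithms studied below exist precisely to avoid recomputing this graph from scratch after every update.

```python
import math
from itertools import combinations

# Illustrative only: the naive O(n^2) static construction of the
# intersection graph of disks of radius r/2, i.e., the communication
# graph of sensors with transmission radius r.

def connected(points, r, a, b):
    """Union-find connectivity over the intersection graph of disks
    of radius r/2 centered at the given points; a, b are indices."""
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (i, p), (j, q) in combinations(enumerate(points), 2):
        # Disks of radius r/2 intersect iff centers are within distance r.
        if math.dist(p, q) <= r:
            parent[find(i)] = find(j)
    return find(a) == find(b)

print(connected([(0, 0), (1, 0), (2.5, 0)], 1.6, 0, 2))  # True, via the chain 0-1-2
```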
While our focus is on theoretical understanding rather than the practicality of specific applications, these examples still indicate the natural appeal of geometric connectivity problems. All these problems have a trivial O(n) solution, by maintaining the intersection graph through edge updates. A systematic approach to beating the linear time bound was proposed in Chan's paper as well [6], by drawing a connection to subgraph connectivity. Assume that a particular object type allows data structures for intersection range searching with space S(n) and query time T(n). It was shown that geometric connectivity can essentially be solved by maintaining a graph of size m = O(S(n) + nT(n)) and running O(S(n)/n + T(n)) vertex updates for every object insertion or deletion. Using the previous subgraph connectivity result [6], an update in the geometric connectivity problem took time O([S(n)/n + T(n)] · [S(n) + nT(n)]^{0.94}). Using our improved result, the bound becomes O([S(n)/n + T(n)] · [S(n) + nT(n)]^{2/3}).

The prime implication in the previous paper is that connectivity of axis-parallel boxes in any constant dimension (in particular, orthogonal line segments in the plane) reduces to subgraph connectivity, with a polylogarithmic cost. Indeed, for such boxes range trees yield S(n) = n · lg^{O(d)} n and T(n) = lg^{O(d)} n. Unfortunately, while nontrivial range searching results are known for many types of objects, very efficient range searching is hard to come by. Consider our main motivating examples:
• for arbitrary (non-orthogonal) line segments in IR^2, one can achieve T(n) = O*(n^{1/2}) and S(n) = O*(n);
• for disks in IR^2, one can achieve T(n) = O*(n^{2/3}) and S(n) = O*(n), or T(n) = O*(n^{1/2}) and S(n) = O*(n^{3/2}).

Even with our improved vertex-update time, the [S(n)/n + T(n)] · [S(n) + nT(n)]^{2/3} bound is too weak to beat the trivial linear update time. For arbitrary line segments in IR^2, one would need to improve the vertex-update time to m^{1/2−ε}, which appears unlikely without FMM (see Section 2). The line segment case was in fact mentioned as a major open problem, implicitly in [6] and explicitly in [1]. The situation gets worse for objects of higher complexity or in higher dimensions.

Our results. In this paper, we are finally able to break the above barrier for dynamic geometric connectivity. At a high level, we show that range searching with any sublinear query time is enough to obtain sublinear update time in geometric connectivity. In particular, we get the first nontrivial update times for arbitrary line segments in the plane, disks of arbitrary radii, and simplices and balls in any fixed dimension.

While the previous reduction [6] involves merely a straightforward usage of "biclique covers", our result here requires much more work. For starters, we need to devise a "degree-sensitive" version of our improved subgraph connectivity algorithm (which is of interest in itself); we then use this and known connectivity structures to maintain not one but two carefully designed intermediate graphs.

Known range searching techniques [2] from computational geometry almost always provide sublinear query time. For instance, Matoušek [28] showed that b ≈ 1/2 is attainable for line segments, triangles, and any constant-size polygons in IR^2; more generally, b ≈ 1/d for simplices or constant-size polyhedra in IR^d. Further results by Agarwal and Matoušek [3] yield b ≈ 1/(d + 1) for balls in IR^d. Most generally, b > 0 is possible for any class of objects defined by semialgebraic sets of constant description complexity.
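As a back-of-the-envelope check of the reduction described above (our own computation, suppressing polylogarithmic factors), plugging in the range-tree bounds for axis-parallel boxes, S(n) = Õ(n) and T(n) = Õ(1), into the improved bound gives

```latex
\[
\Bigl[\tfrac{S(n)}{n} + T(n)\Bigr]\cdot\bigl[S(n) + n\,T(n)\bigr]^{2/3}
\;=\; \tilde{O}(1)\cdot \tilde{O}(n)^{2/3}
\;=\; \tilde{O}\bigl(n^{2/3}\bigr),
\]
```

which matches the Õ(n^{2/3}) update time quoted for boxes. By contrast, with T(n) = O*(n^{1/2}) and S(n) = O*(n), as for line segments, the same expression is already O*(n^{1/2} · n) = O*(n^{3/2}), worse than the trivial linear bound; this is exactly the barrier that the later sections work around.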
More results. Our general sublinear results undoubtedly invite further research into finding better bounds for specific classes of objects. In general, the complexity of range queries provides a natural barrier for the update time, since upon inserting an object we at least need to determine whether it intersects any object already in the set. Essentially, our result has a quadratic loss compared to range queries: if T(n) = n^{1−b}, the update time is n^{1−Θ(b²)}. In Section 5, we make a positive step towards closing this quadratic gap: we show that if the updates are given offline (i.e., are known in advance), the amortized update time can be made n^{1−Θ(b)}. We need FMM this time, but the usage of FMM here is more intricate (and interesting) than typical. For one, it is crucial to use fast rectangular matrix multiplication. Along the way, we even find ourselves rederiving Yuster and Zwick's sparse matrix multiplication result [38] in a more general form. The juggling of parameters is also more unusual, as one can suspect from looking at our actual update bound, which is O(n^{(1+α−bα)/(1+α−bα/2)}), where α = 0.294 is an exponent associated with rectangular FMM.

Related Work

Before proceeding to our new algorithms, we mention more related work, for the sake of completeness.

Graphs. Most previous work on dynamic subgraph connectivity concerns special cases only. Frigioni and Italiano [14] considered vertex updates in planar graphs, and described a polylogarithmic solution. If vertices have constant degree, vertex updates are equivalent to edge updates. For edge updates, Henzinger and King [17] were the first to obtain polylogarithmic update times (randomized). This was improved by Holm et al. [20] to a deterministic solution with O(lg^2 m) time per update, and by Thorup [34] to a randomized solution with O(lg m · (lg lg m)^3) update time. The randomized bound almost matches the Ω(lg m) lower bound from [30]. All these data structures maintain a spanning forest as a certificate for connectivity. This idea fails for vertex updates in the general case, since the certificate can change substantially after just one update. In many practical settings, these planar-graph and constant-degree special cases are unfortunately inadequate. In particular, large networks of routers are often designed as overlay graphs over a (small-degree) geographic graph. Long fiber-optic links bypass intermediate nodes, in order to minimize the latency cost of passing through the electric domain repeatedly.

For more difficult dynamic graph problems, the goal is typically changed from getting polylogarithmic bounds to finding better exponents in polynomial bounds; for example, see all the papers on directed reachability [10,25,32,33]. Evidence suggests that dynamic subgraph connectivity fits this category. It was observed [6] that finding triangles (3-cycles) or quadrilaterals (4-cycles) in directed graphs can be reduced to O(m) vertex updates. Thus, an update bound better than √m appears unlikely without FMM, since the best running time for finding triangles without FMM is O(m^{3/2}), dating back to STOC'77 [24]. Even with FMM, known results are only slightly better: finding triangles and quadrilaterals takes time O(m^{1.41}) [5] and O(m^{1.48}) [37], respectively. Thus, current knowledge prevents an update bound better than m^{0.48}.

Geometry. It was shown [6] that subgraph connectivity can be reduced to dynamic connectivity of axis-parallel line segments in 3 dimensions.
Thus, as soon as one gets enough combinatorial richness in the host geometric space, subgraph connectivity becomes the only possible way to solve geometric connectivity. When the geometry is less combinatorially rich, it is possible to find ad hoc algorithms that do not rely on subgraph connectivity. Special cases that have been investigated include the following:
• for orthogonal segments or axis-parallel rectangles in the plane, Afshani and Chan [1] proposed a data structure with update time O(n^{10/11}) and constant query time. This is incomparable to our result of update time Õ(n^{2/3}) and query time Õ(n^{1/3}).
• for unit axis-parallel hypercubes, the problem reduces to maintaining the minimum spanning tree under the ℓ∞ metric. Eppstein [11] describes a general technique for dynamic geometric MST, ultimately appealing to range searching, and obtains polylogarithmic time per operation.
• for unit balls, the problem reduces to dynamic Euclidean MST, which in turn reduces to range searching by Eppstein's technique [11]. In two dimensions, Chan's dynamic nearest-neighbor data structure [7] implies an O(lg^{10} n) update time for this problem.

Dynamic geometric connectivity is a natural continuation of static geometric connectivity problems, which have been studied since the early 1980s. As in our case, the main challenge is to avoid working explicitly with the intersection graph, which could be of quadratic size. Known results include O(n lg n)-time algorithms [22,23] for computing the connected components of axis-aligned rectangles in the plane, and O(n^{4/3})-time algorithms [16,27] for arbitrary line segments in the plane. More generally, Chan [6] (and later Eppstein [12]) noted the connection of static geometric connectivity to range searching, which implied subquadratic algorithms for objects with constant description complexity. The connection carries over to the incremental (insertion-only) and decremental (deletion-only) cases [6], e.g., yielding O(n^{1/3}) update time for arbitrary line segments, reproving and extending some older results [4]. Another related problem is maintaining connectivity in the kinetic setting, where objects move continuously according to known flight plans. See [18,19] for the case of axis-parallel boxes, and [15] for unit disks.

Dynamic Subgraph Connectivity with Õ(m^{2/3}) Update Time

In this section, we present our new method for the dynamic subgraph connectivity problem: maintaining a subset S of vertices in a graph G, under vertex insertions and deletions in S, so that we can decide whether any two query vertices are connected in the subgraph induced by S. We will call the vertices in S the active vertices. For now, we assume that the graph G itself is static. The complete description of the new method is given in the proof of the following theorem:

Theorem 1. Given a graph G with m edges, we can maintain a data structure for dynamic subgraph connectivity that supports vertex updates in Õ(m^{2/3}) amortized time and connectivity queries in Õ(m^{1/3}) time.

The proof is "short and sweet", especially if the reader compares with Chan's paper [6]. The previous method requires several stages of development, addressing the offline and semi-online special cases, along with the use of FMM; we completely bypass these intermediate stages, and FMM, here. Embedded below, one can find a number of different ideas (some also used in [6]): rebuilding periodically after a certain number of updates, distinguishing "high-degree" features from "low-degree" features (e.g., see [5,37]), amortizing by splitting smaller subsets from larger ones, etc.
The key lies in the definition of a new, yet deceptively simple, intermediate graph G*, which is maintained by known polylogarithmic data structures for dynamic connectivity under edge updates [17,20,34]. Except for these known connectivity structures, the description is entirely self-contained.

Proof. We divide the update sequence into phases, each consisting of q := m/∆ updates, for a parameter ∆ to be set later. The active vertices are partitioned into two sets P and Q, where P undergoes only deletions and Q undergoes both insertions and deletions. Each vertex insertion is done to Q. At the end of each phase, we move the elements of Q to P and reset Q to the empty set. This way, |Q| is kept at most q at all times. Call a connected component in (the subgraph induced by) P high if the sum of the degrees of its vertices exceeds ∆, and low otherwise. Clearly, there are at most O(m/∆) high components.

The data structure.
• We store the components of P in a data structure for decremental (deletion-only) connectivity that supports edge deletions in polylogarithmic amortized time.
• We maintain a bipartite multigraph Γ between V and the components γ in P: for each uv ∈ E where v lies in component γ, we create a copy of an edge uγ ∈ Γ.
• For each vertex pair u, v, we maintain the value C[u, v], defined as the number of low components in P that are adjacent to both u and v in Γ. (Actually, only O(m∆) entries of C[·, ·] are nonzero and need to be stored.)
• We define a graph G* whose vertices are the vertices of Q and the components of P, with the following edges:
(a) for each pair u, v ∈ Q with C[u, v] ≥ 1, an edge uv ∈ G*;
(b) for each u ∈ Q and each high component γ adjacent to u in Γ, an edge uγ ∈ G*.
We maintain G* in another data structure for dynamic connectivity supporting polylogarithmic-time edge updates.

Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G*. The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P. If γ is high, then edges of type (b) ensure that u and v are connected in G*. If instead γ is low, then edges of type (a) ensure that u and v are connected in G*. By concatenation, the argument extends to show that any two vertices u, v ∈ Q connected by a path in G are connected in G*.

Queries. Given two vertices v_1 and v_2, if both are in Q, we can simply test whether they are connected in G*. If instead v_j (j ∈ {1, 2}) is in a high component γ_j, then we can replace v_j with any vertex of Q adjacent to γ_j in G*. If no such vertex exists, then because of type-(b) edges, γ_j is an isolated component and we can simply test whether v_1 and v_2 are both in the same component of P. If on the other hand v_j is in a low component γ_j, then we can exhaustively search for a vertex in Q adjacent to γ_j in Γ, in O(∆) time, and replace v_j with such a vertex. Again, if no such vertex exists, then γ_j is an isolated component and the test is easy. The query cost is O(∆).

Update of a vertex v in Q. We insert or delete the edges of G* incident to v: type-(a) edges to every u ∈ Q with C[v, u] ≥ 1 (at most |Q| ≤ q of them), and type-(b) edges to the high components adjacent to v (at most O(m/∆) of them). The cost is Õ(q + m/∆) = Õ(m/∆).

Deletion of a vertex from a high component γ in P. The component γ is split into a number of subcomponents γ_1, . . . , γ_ℓ with, say, γ_1 being the largest. We can update the multigraph Γ in time O(deg(γ_2) + · · · + deg(γ_ℓ)) by splitting the smaller subcomponents from the largest subcomponent. Consequently, we need to update O(deg(γ_2) + · · · + deg(γ_ℓ)) edges of type (b) in G*.
Since P undergoes deletions only, a vertex can belong to the smaller subcomponents in at most O(lg n) splits over the entire phase, and so the total cost per phase is O(m), which is absorbed in the preprocessing cost of the phase. For each low subcomponent γ_j, we update the matrix C[·, ·] in O(deg(γ_j)∆) time, by examining each edge γ_j v ∈ Γ and each of the O(∆) vertices u adjacent to γ_j, and testing whether γ_j u ∈ Γ. Consequently, we need to update O(deg(γ_j)∆) edges of type (a) in G*. Since a vertex can change from being in a high component to a low component at most once over the entire phase, the total cost per phase is O(m∆), which is absorbed by the preprocessing cost.

Finale. The overall amortized cost per update operation is Õ(m∆/q + q + m/∆) = Õ(∆^2 + m/∆), which is Õ(m^{2/3}) by choosing ∆ := m^{1/3} (so that q = m^{2/3}); the query cost is then O(∆) = O(m^{1/3}). Note that edge insertions and deletions in G can be accommodated easily (e.g., see Lemma 2 of the next section).
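To make the construction tangible, here is a toy sketch of the intermediate graph G* (our own illustration): it builds Γ, the counts C[u, v], and the type-(a)/(b) edges for a static snapshot, and answers Q-to-Q connectivity by BFS. The polylogarithmic edge-update connectivity black boxes are deliberately replaced by naive recomputation, so the sketch captures only the reduction, not the Õ(m^{2/3}) amortized bound; the example at the bottom is an assumption for demonstration.

```python
from collections import defaultdict, deque
from itertools import combinations

def build_gstar(adj, P_components, Q, DELTA):
    """Static snapshot of G*: names (Gamma, C, high, ...) follow the proof."""
    comp_deg = defaultdict(int)
    Gamma = defaultdict(set)                  # vertex u -> components adjacent to u
    for c, members in enumerate(P_components):
        for v in members:
            comp_deg[c] += len(adj[v])        # sum of member degrees
            for u in adj[v]:
                Gamma[u].add(c)
    high = {c for c in comp_deg if comp_deg[c] > DELTA}
    gstar = defaultdict(set)
    for u, v in combinations(sorted(Q), 2):
        # C[u, v] = number of LOW components adjacent to both (type (a) edges).
        if len((Gamma[u] & Gamma[v]) - high) >= 1:
            gstar[u].add(v); gstar[v].add(u)
    for u in Q:                               # type (b): Q vertex to high component
        for c in Gamma[u] & high:
            gstar[u].add(('comp', c)); gstar[('comp', c)].add(u)
    return gstar

def connected_in_gstar(gstar, s, t):
    seen, queue = {s}, deque([s])
    while queue:
        x = queue.popleft()
        if x == t:
            return True
        for y in gstar[x] - seen:
            seen.add(y); queue.append(y)
    return False

# Example: path a-b-c-d; b, c form one low component of P; a, d are in Q.
adj = defaultdict(set)
for u, v in [('a', 'b'), ('b', 'c'), ('c', 'd')]:
    adj[u].add(v); adj[v].add(u)
g = build_gstar(adj, P_components=[['b', 'c']], Q=['a', 'd'], DELTA=10)
print(connected_in_gstar(g, 'a', 'd'))        # True, via a type-(a) edge (C[a,d] = 1)
```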
For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If u and v are both low, then edges of type (b ′ ) ensure that u and v are connected in G * . In the remaining case, at least one of the two vertices, say, u is high, and γ is low; here, edges of type (a ′ ) ensure that u and v are again connected in G * . The claim follows by concatenation. Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b ′ ) edges, γ j can only be adjacent to high vertices of Q. We can exhaustively search for a high vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. If no such vertex exists, then γ j is an isolated component and we can simply test whether v 1 and v 2 are both in γ j . The cost is O(∆). Preprocessing per phase. At the beginning of each phase, the cost to preprocess the data structure is O(m∆) as before. We can charge every update operation with an amortized cost of O(m∆/q) = O(∆ 2 ). Edge updates. We can simulate the insertion of an edge uv by inserting a new low vertex z adjacent to only u and v to Q. Since the degree is 2, the cost is O(1). We can later simulate the deletion of this edge by deleting the vertex z from Q. Range searching tools from geometry Next, we need known range searching techniques. These techniques give linear-space data structures (S(n) = O(n)) that can retrieve all objects intersecting a query object in sublinear time (T (n) = O(n 1−b )) for many types of geometric objects. We assume that our class of geometric objects satisfies the following property for some constant b > 0-this property neatly summarizes all we need to know from geometry. The property is typically proved by applying a suitable "partition theorem" in a recursive manner, thereby forming a so-called "partition tree"; for example, see the work by Matoušek [28] or the survey by Agarwal and Erickson [2]. Each canonical subset corresponds to a node of the partition tree (more precisely, the subset of all objects stored at the leaves underneath the node). Matoušek's results imply that b = 1/d − ε is attainable for simplices or constant-size polyhedra in IR d . (To go from simplex range searching to intersection searching, one uses multi-level partition trees; e.g., see [29].) Further results by Agarwal and Matoušek [3] yield b = 1/(d + 1) − ε for balls in IR d and nontrivial values of b for other families of curved objects (semialgebraic sets of constant degree). The special case of axis-parallel boxes corresponds to b = 1. The specific bounds in (i) and (ii) may not be too well known, but they follow from the hierarchical way in which canonical subsets are constructed. For example, (ii) follows since the subsets in C z of size at most n/∆ are contained in O(∆ 1−b ) subsets of size O(n/∆). In fact, (multi-level) partition trees guarantee a stronger inequality, , from which both (i) and (ii) can be obtained after a moment's thought. As an illustration, we can use the above property to develop a data structure for a special case of dynamic geometric connectivity where insertions are done in "blocks" but arbitrary deletions are to be supported. 
Although the insertion time is at least linear, the result is good if the block size s is sufficiently large. This subroutine will make up a part of the final solution.

Lemma 4. We can maintain the connected components among a set S of objects in a data structure that supports insertion of a block of s objects in O(n + sn^{1−b}) amortized time (s < n), and deletion of a single object in O(1) amortized time.

Proof. We maintain a multigraph H in a data structure for dynamic connectivity with polylogarithmic edge update time (which explicitly maintains the connected components), where the vertices are the objects of S. This multigraph will obey the invariant that two objects are geometrically connected iff they are connected in H. We do not insist that H has linear size.

Insertion of a block B to S. We first form a collection C of canonical subsets for S ∪ B by Property 3. For each z ∈ B and each C ∈ C_z, we assign z to C. For each canonical subset C ∈ C, if C is assigned at least one object of B, then we create new edges in H linking all objects of C and all objects assigned to C in a path. (If this path overlaps with previous paths, we create multiple copies of edges.) The number of edges inserted is thus O(n + |B|n^{1−b}).

Justification. The invariant is satisfied since all objects in a canonical subset C intersect all objects assigned to C, and are thus all connected if there is at least one object assigned to C.

Deletion of an object z from S. For each canonical subset C containing or assigned the object z, we need to delete at most 2 edges and insert 1 edge to maintain the path. As soon as the path contains no object assigned to C, we delete all the edges in the path. Since the length of the path can only decrease over the entire update sequence, the total number of such edge updates is proportional to the initial length of the path. We can charge the cost to edge insertions.
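The path-linking trick in this proof can be sketched in a few lines (our own toy illustration; the dynamic connectivity structure over H and the range-searching construction of the canonical subsets are both elided):

```python
# Toy sketch of the path-linking trick in Lemma 4: per canonical subset C,
# keep the objects of C plus the objects assigned to C linked in a path, so
# they form one connected blob in the multigraph H whenever at least one
# object is assigned to C. The example data is our own assumption.

class CanonicalPath:
    def __init__(self, members, assigned):
        self.nodes = list(members) + list(assigned)
        self.assigned = set(assigned)

    def edges(self):
        # The path contributes edges only while some assigned object remains.
        if not self.assigned:
            return []
        return list(zip(self.nodes, self.nodes[1:]))

    def delete(self, obj):
        # Deleting an object splices its two path neighbors together:
        # at most 2 edges removed and 1 inserted in H.
        if obj in self.nodes:
            self.nodes.remove(obj)
        self.assigned.discard(obj)

p = CanonicalPath(members=['s1', 's2'], assigned=['z'])
print(p.edges())   # [('s1', 's2'), ('s2', 'z')] -- one connected blob
p.delete('z')
print(p.edges())   # [] -- no assigned object left, the path dissolves
```

Deleting an object thus touches O(1) edges per canonical subset containing or assigned to it, matching the O(1) amortized deletion bound once the splice costs are charged to the insertions that built the path.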
Putting it together. We are finally ready to present our sublinear result for dynamic geometric connectivity. We again need the idea of rebuilding periodically, and of splitting smaller sets from larger ones. In addition to the graph H (of superlinear size) from Lemma 4, which undergoes insertions only in blocks, the key lies in the definition of another subtly crafted intermediate graph G (of linear size), maintained this time by the subgraph connectivity structure of Lemma 2. The definition of this graph involves multiple types of vertices and edges. The details of the analysis and the setting of parameters get more interesting.

Theorem 5. Given a class of objects satisfying Property 3 with a constant b ∈ (0, 1/2], we can maintain dynamic geometric connectivity with sublinear amortized update time, of the form O*(n^{1−Θ(b²)}), and sublinear query time.

Proof. We divide the update sequence into phases, each consisting of y := n^b updates. The current objects are partitioned into two sets X and Y, where X undergoes only deletions and Y undergoes both insertions and deletions. Each insertion is done to Y. At the end of each phase, we move the elements of Y to X and reset Y to the empty set. This way, |Y| is kept at most y at all times. At the beginning of each phase, we form a collection C of canonical subsets for X by Property 3.

The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for dynamic subgraph connectivity, where the vertices are objects of X ∪ Y, components of X, and the canonical subsets of the current phase:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset and each of its objects in X.
(c) Create an edge in G between each object z ∈ Y and each canonical subset C ∈ C_z. Here, we assign z to C.
(d) Create an edge in G between every two intersecting objects in Y.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y. Vertices that are objects or components are always active.

Justification. We claim that two objects are geometrically connected in X ∪ Y iff they are connected in the subgraph induced by the active vertices in the graph G. The "only if" direction is obvious. For the "if" direction, we note that all objects in an active canonical subset C intersect all objects assigned to C and are thus all connected.

Queries. We answer a query by querying in the graph G. The cost is O(∆).

Preprocessing per phase. Before a new phase begins, we need to update the components in X as we move all elements of Y to X (a block insertion). By Lemma 4, the cost is O(n + yn^{1−b}), i.e., O(n/y + n^{1−b}) = O(n^{1−b}) amortized per update (recall that y = n^b ≤ n^{1−b} for b ≤ 1/2).

Deletion of an object z in X. We first update the components of X. By Lemma 4, the amortized cost is O(1). We can now update the edges of type (a) in G. The total number of such edge updates per phase is O(n lg n), by always splitting smaller components from larger ones. The amortized number of edge updates is thus Õ(n/y). The amortized cost is Õ((n/y)∆^2) = Õ(n^{1−b}∆^2).

Finale. The overall amortized cost per update operation is sublinear for suitable choices of y and ∆; with T(n) = n^{1−b}, it works out to O*(n^{1−Θ(b²)}), as stated in the introduction. Note that we can still prove the theorem for b > 1/2, by handling the O(y^2) intersections among Y (the type (d) edges) in a less naive way. However, we are not aware of any specific applications with b ∈ (1/2, 1).

Offline Dynamic Geometric Connectivity

For the special case of offline updates, we can improve the result of Section 4 for small values of b by a different method using rectangular matrix multiplication. Let M[n_1, n_2, n_3] represent the cost of multiplying a Boolean n_1 × n_2 matrix A with a Boolean n_2 × n_3 matrix B. Let M[n_1, n_2, n_3 | m_1, m_2] represent the same cost under the knowledge that the number of 1's in A is m_1 and the number of 1's in B is m_2. We can reinterpret this task in graph terms: suppose we are given a tripartite graph with vertex classes V_1, V_2, V_3 of sizes n_1, n_2, n_3 respectively, where there are m_1 edges between V_1 and V_2 and m_2 edges between V_2 and V_3. Then M[n_1, n_2, n_3 | m_1, m_2] represents the cost of deciding, for each u ∈ V_1 and v ∈ V_3, whether u and v are adjacent to a common vertex in V_2.

An offline degree-sensitive version of subgraph connectivity. We begin with an offline variant of Lemma 2 (Lemma 6), in which the dominant part of the update cost is a rectangular Boolean matrix product of the kind just defined.

Proof. We divide the update sequence into phases, each consisting of q low-vertex updates. The active vertices are partitioned into two sets P and Q, with Q ⊆ Q_0, where P and Q_0 are static and Q undergoes both insertions and deletions. Each vertex insertion/deletion is done to Q. At the end of each phase, we reset Q_0 to hold all O(∆) high vertices plus the low vertices involved in the updates of the next phase, reset P to hold all active vertices not in Q_0, and reset Q to hold all active vertices in Q_0. Clearly, |Q| ≤ |Q_0| = O(q). The data structure is the same as the one in the proof of Lemma 2, with one key difference: we only maintain the value C[u, v] when u is a high vertex in Q_0 and v is a (high or low) vertex in Q_0. Moreover, we do not need to distinguish between high and low components, i.e., all components are considered low. During preprocessing of each phase, we can now compute C[·, ·] for all such pairs with a rectangular Boolean matrix product of the type M[·, ·, · | ·, ·] defined above. Deletions in P do not occur now.
Sparse and dense rectangular matrix multiplication

Sparse matrix multiplication can be reduced to multiplying smaller dense matrices, by using a "high-low" trick [5]. Fact 7(i) below can be viewed as a variant of [6, Lemma 3.1] and a result of Yuster and Zwick [38]; incidentally, this fact is sufficiently powerful to yield a simple(r) proof of Yuster and Zwick's sparse matrix multiplication result, when combined with known bounds on dense rectangular matrix multiplication. Fact 7(ii) below states one known bound on dense rectangular matrix multiplication which we will use.

Putting it together

We now present our offline result for dynamic geometric connectivity using Lemma 6. Although we also use Property 3, the design of the key graph G is quite different from the one in the proof of Theorem 5. For instance, the size of the graph is larger (and no longer O(n)), but the number of edges incident to high vertices remains linear; furthermore, each object update triggers only a constant number of vertex updates in the graph. All the details come together in the analysis to lead to some intriguing choices of parameters.

Proof. We divide the update sequence into phases, each consisting of q updates, where q is a parameter satisfying ∆ ≤ q ≤ n/∆^{1−b}. The current objects are partitioned into two sets X and Y, with Y ⊆ Y_0, where X and Y_0 are static and Y undergoes both insertions and deletions. Each insertion/deletion is done to Y. At the end of each phase, we reset Y_0 to hold all objects involved in the updates of the next phase, X to hold all current objects not in Y_0, and Y to hold all current objects in Y_0. Clearly, |Y| ≤ |Y_0| = O(q). At the beginning of each phase, we form a collection C of canonical subsets for X ∪ Y_0 by Property 3.

The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for offline dynamic subgraph connectivity, where the vertices are objects of X ∪ Y_0, components of X, and canonical subsets of size exceeding n/∆:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset C of size exceeding n/∆ and each of its objects in X ∪ Y.
(c) Create an edge in G between each object z ∈ Y_0 and each canonical subset C ∈ C_z of size exceeding n/∆. Here, we assign z to C.
(d) Create an edge in G between each object z ∈ Y_0 and each object in the union of the canonical subsets in C_z of size at most n/∆.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y. We make the vertices in X ∪ Y active, and all components active.
The high vertices are precisely the canonical subsets of size exceeding n/∆; there are O(∆) such vertices.

Update of an object z in Y. We need to make a single vertex update for z in G, which has degree O(n/∆^b) by Property 3(ii). Furthermore, we may have to change the status of as many as O(∆^{1−b}) high vertices by Property 3(i). According to Lemma 8, the cost of these vertex updates is O(M[∆, n, q | n, m]/q + n/∆^b + ∆^{1−b}·q).
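To illustrate the "high-low" trick mentioned at the start of this subsection, here is a hedged sketch for Boolean matrix products over plain arrays. The threshold t is a free parameter, and numpy's dense product stands in for the asymptotically efficient rectangular multiplication of Fact 7(ii), so the sketch shows the shape of the reduction rather than its exact bounds.

```python
# Hedged sketch of the high-low reduction for Boolean matrix products.
# Assumption: dense 0/1 numpy arrays; the '@' below is a placeholder for a
# theoretically fast rectangular multiplication.
import numpy as np

def boolean_product_highlow(A: np.ndarray, B: np.ndarray, t: int) -> np.ndarray:
    """Return the Boolean product C = A*B, splitting the middle indices into
    'high' ones (column of A has more than t ones; handled by one dense
    rectangular product) and 'low' ones (handled by enumerating witnesses)."""
    n1, n2 = A.shape
    assert B.shape[0] == n2
    n3 = B.shape[1]
    C = np.zeros((n1, n3), dtype=bool)

    col_weight = A.sum(axis=0)          # ones per middle index in A
    high = col_weight > t               # at most m1/t indices are high

    # Dense part: one rectangular multiplication over the high middle indices.
    if high.any():
        C |= (A[:, high].astype(np.int64) @ B[high, :].astype(np.int64)) > 0

    # Sparse part: for each low middle index k, pair up the ones of A[:, k]
    # with the ones of B[k, :]; total work equals the number of witness pairs.
    for k in np.flatnonzero(~high):
        rows = np.flatnonzero(A[:, k])
        cols = np.flatnonzero(B[k, :])
        if rows.size and cols.size:
            C[np.ix_(rows, cols)] = True
    return C
```

The dense part touches at most m_1/t high middle indices, while the enumeration part costs the total number of witness pairs through low indices; Fact 7 trades these two contributions off against M[·] to obtain its bound.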
Open Problems

Our work opens up many interesting directions for further research. For subgraph connectivity, an obvious question is whether the O(m^{2/3}) vertex-update bound can be improved (without or with FMM); as we have mentioned, improvements beyond √m without FMM are not possible without a breakthrough on the triangle-finding problem. An intriguing question is whether for dense graphs we can achieve update time sublinear in n, i.e., O(n^{1−ε}) (or possibly even sublinear in the degree). For geometric connectivity, it would be desirable to determine the best update bounds for specific shapes such as line segments and disks in two dimensions. Also, directed settings of geometric connectivity arise in applications and are worth studying; for example, when sensors' transmission ranges are balls of different radii or wedges, a sensor may lie in another sensor's range without the reverse being true. For both subgraph and geometric connectivity, we can reduce the query time at the expense of increasing the update time, but we do not know whether constant or polylogarithmic query time is possible with sublinear update time in general (see [1] for a result on the 2-dimensional orthogonal special case). Currently, we do not know how to obtain our update bounds with linear space (e.g., Theorem 1 requires O(m^{4/3}) space), nor do we know how to get good worst-case update bounds (since the known polylogarithmic results for connectivity under edge updates are all amortized). Also, the queries we have considered are about connectivity between two vertices/objects. Can nontrivial results be obtained for richer queries such as counting the number of connected components (see [1] on the 2-dimensional orthogonal case), or perhaps shortest paths or minimum cut?
The Danish ‘ghetto initiatives’ and the changing nature of social citizenship, 2004–2018

This article critically examines the Danish ‘ghetto initiatives’ of 2004, 2010, 2013 and 2018, with a particular focus on their implications for ‘social citizenship’. Its approach is twofold: firstly, it explores how each of the four major ghetto initiatives constructed ghettos and their residents as a problem for the welfare state, and what policy measures were proposed to address the problems identified. Secondly, it examines the legislative changes that resulted from each of the ghetto initiatives and assesses their implications for social citizenship. In doing so, it relates its findings to the different developmental stages of social citizenship in Danish welfare state history. The article argues that the ghetto initiatives have led to an unprecedented spatialization and ethnicization of social citizenship which mark a radical departure from the guiding principles of post-1945 Danish welfare thought and practice.

Introduction

For more than two decades, 'ghetto' has been an official policy term in Denmark. Since 2010, the Danish government has used changing sets of criteria according to which it publishes annual 'ghetto lists', defining areas that are deemed to present a concentration of social problems. In these areas, special legal provisions apply concerning crime prevention, integration, data protection, welfare and the allocation of public housing. The government's official use of such a historically loaded term as 'ghetto' has led to the Danish ghetto lists being widely discussed both within Denmark and beyond its borders. But the measures adopted as part of Denmark's 'ghetto initiatives' have also drawn media attention across the world. A 2018 initiative, for example, made it a legal obligation that children living in specific neighbourhoods attend at least 25 hours of mandatory day care in Danish institutions from the age of twelve months. The same initiative also allowed for a doubling of criminal penalties in ghetto areas. These measures were widely condemned as discriminatory and in violation of established principles of modern liberal democracies (Barry and Sorensen, 2018; Bendixen, 2018). This article casts its spotlight on the ways in which Denmark's ghetto initiatives reflect changes in official policy approaches to social citizenship in the Danish welfare state. It proceeds in a twofold manner: firstly, it explores how each of the four major ghetto initiatives constructed ghettos and their residents as a problem for the welfare state, and how these arguments in turn were used to justify a range of policy measures aimed at targeting specific groups of individuals. Secondly, it examines the substantive changes that the ghetto initiatives brought to the status and rights of ghetto residents and welfare recipients and assesses their implications for the 'Danish model' of social citizenship. The guiding research question for this article is: What do the Danish ghetto policies reveal about changes in the nature of social citizenship in Denmark? In response to this question, the article argues that the ghetto policies of the past twenty years indicate a profound change to established Danish interpretations and practices of social citizenship in the postwar era. This change is seen in the way in which social citizenship is increasingly conceptualized and shaped in relation to geographic entities (spatialization), which in turn are defined, inter alia, by ethnicity (ethnicization).
While the official argument, as we shall see, is that the policies are aimed at promoting 'equality', the policies introduce a status differentiation that deviates from the ways in which citizenship equality was understood for most of the twentieth century. For the purposes of this article, 'social citizenship' is defined as that aspect of citizenship that concerns the relationship between the individual and public social policies, based on a set of rights and obligations in the citizen-state relationship. Citizenship, in turn, is defined as the status of being a full member of a national polity. This definition is an example of what is sometimes referred to as 'narrower' definitions of social citizenship, derived from Marshall's influential work Citizenship and Social Class (1953; Powell, 2002). Other, broader, definitions of social citizenship have stressed the 'relational' aspect of citizenship by looking at an individual within the totality of their social existence, from the regional to the global (Lister, 2007; Karolewski, 2013). The purpose of this article is to assess how the relationship between the individual and the welfare state has been constructed in government policy. The narrower definition set out above will therefore be the most useful in the context of this article. Methodologically, the article combines policy and legal analysis with conceptual history. The article employs a critical policy analysis with the aim of uncovering the explanatory and normative claims behind Denmark's ghetto initiatives (Bacchi and Goodwin, 2016). Analysing the reasoning and justifications for the proposed measures in the four ghetto strategy papers allows us to assess how social citizenship has been conceptualized in relation to 'ghettos' over the past twenty years. In providing a structured account of the successive legal changes brought about by each ghetto initiative, the article then traces substantive changes in the particular individual rights and entitlements associated with social citizenship. Throughout, the article relates its findings to the more long-term history of the concept of social citizenship in Denmark. This combination of approaches is particularly well-suited to developing a nuanced understanding of social citizenship in the context of Denmark's ghetto initiatives, because changing conceptions of social citizenship are often deeply intertwined with changes to rights and obligations in practice, and both need to be seen in a long historical perspective. The main sources of this article are the government's four 'ghetto strategy papers' (Regeringen, 2004, 2010, 2013, 2018). The four strategy papers are the key policy documents behind the Danish ghetto initiatives, and they outline in detail the proposed measures, why the government deems them necessary, and how both 'ghettos' as such as well as the residents within them relate to the Danish welfare state. The strategy papers therefore allow us to trace changes over time in official understandings of, and approaches to, social citizenship in the specific context of the ghetto initiatives. In addition, the article draws on legislative materials following from each of the four ghetto initiatives whenever it serves to highlight the nature of the policies more clearly, or where specific proposals from the ghetto strategy papers were amended or rejected by the Danish Parliament.
This article contributes to the growing literature on social citizenship in the twenty-first century (see for example Evers and Guillemard, 2012; Johansson and Hvinden, 2007). So far, the Danish ghetto initiatives have not yet been subjected to much scholarly review. Existing research has focused mainly on their role in wider political discourses concerning immigration, ethnicity and multiculturalism (Freiesleben, 2016; Lewenhaupt, 2018). Other scholarly publications have focused on the justifiability, coherence and efficiency of the ghetto initiatives in light of empirical evidence (Skifter Andersen, 2007) and the growing political opposition against the measures (Bach, 2019). A recent report by the Nordic Council of Ministers offers a comparative analysis of Nordic anti-segregation measures launched in 2018 (Staver et al., 2019). However, the distinct relationship between the Danish ghetto initiatives and changing official ideas and constructions of social citizenship has not yet been explored in any detail. The article is divided into six main parts. It starts with (1) a brief exploration of how social citizenship has been understood and developed throughout Danish history. Section (2) then provides a brief overview of the historical background to the ghetto policies, and in particular traces the use of the term 'ghetto' in Danish political discourse since the early twentieth century. Sections (3) to (6) examine each of the four Danish ghetto initiatives launched between 2004 and 2018. Following the approach outlined above, each ghetto initiative is analysed in regard to its characterization of ghettos and their place in the welfare state, the key measures adopted in the context of each policy, and its wider implications for social citizenship. The article also relates the Danish ghetto initiatives to comparable strategies across Europe and, in concluding, assesses whether the ghetto strategies represent a paradigm shift for Danish social citizenship.

The evolution of social citizenship in Denmark

This section examines how 'social citizenship' has been understood conceptually and how its contours have changed throughout Danish history. We can roughly identify four historical stages in the development of Danish social citizenship (Mouritsen, 2015: 83-87). The first stage encompasses the first comprehensive social welfare laws between the 1890s and the 1920s. The period was driven, as in other European countries, by fear of revolutions, political pragmatism and conservative philanthropy. While some social rights were granted, the idea of social citizenship was not yet the centre-point of political concern. Nevertheless, these early laws laid an important foundation in removing some of the stigma associated with earlier poor laws and in granting some welfare provision without the loss of civil and political rights (Petersen et al., 2010). The social reforms of the 1930s, by contrast, were strongly shaped by social democratic ideas and driven by visions for equal and individual social rights (Petersen et al., 2011). The reforms were guided by a desire to remove more fully the stigma associated with welfare provision. These visions for social citizenship were premised on the idea of realising an individual's full membership of the national polity through status equality. During the interwar period, rights to various forms of social welfare became more comprehensive and enforceable.
While most schemes remained highly means tested, only to be expanded after the Second World War, the ideas developed during the interwar period were to form the basis for the distinct type of socio-liberal social citizenship that evolved in the Nordic states during the twentieth century. The third developmental stage commenced after the Second World War. This period, widely described as the 'golden era' of the welfare state, marked the consolidation of the Nordic model. This consolidation was facilitated by a period of economic growth and low unemployment rates. The Danish welfare system became defined by its universalism, generous benefits and a high degree of redistribution (Kildal and Kuhnle, 2005). It was also characteristically individualistic, tying social rights to the individual rather than the family unit. Within the welfare state framework, individuals were seen as equal, and the creation of welfare institutions shared by all members of society was to ensure an absence of stigma and promote individual self-respect (Mouritsen, 2015: 13, 85). Welfare institutions were expanded during this period, including in the field of childcare, enabling women to achieve high levels of labour market participation. But gradually, women were also expected to participate fully in the labour market. The Danish welfare contract began to rest on the idea that, as a rule, all individuals, regardless of gender, should support themselves through labour market participation and contribute to the community through income tax (the Nordic 'work line'). Towards the end of the twentieth century began the 'fourth phase' in the history of social citizenship. Rising immigration numbers and periods of economic stagnation and high unemployment put the social welfare system of the 'golden era' under pressure. Benefits were reduced in value, and in political discourse the emphasis shifted towards the role and duties of each citizen in the larger welfare state context (Jønsson and Petersen, 2013: 170). The ideal welfare state citizen was the 'active' citizen who contributed to society through labour, or at least demonstrated the willingness to acquire the skillset needed to do so. The 1990s saw the introduction of an array of conditionalities into welfare provisions with a focus on 'capacity-building' and 'workfare'. Social citizenship became more 'contractualized', with individual activation plans agreed upon between the state and the individual. This, in practice, gave wide discretion to 'street level bureaucrats' (Lipsky, 1980), who individualized the conditions that needed to be met in order for individuals to qualify for various types of benefits. A key driving force behind these changes in the conception and practice of social citizenship towards the end of the twentieth century was immigration. More than any other group, immigrants and Danes of 'non-Western' origin were scrutinised as to their contribution to the welfare state and the wider national community (Jønsson and Petersen, 2012). Participation in society began to be seen no longer exclusively in terms of work and tax contributions, but in a deeper engagement with the culture, language and 'values' of Denmark (Mouritsen, 2015: 64-69). Full social citizenship became more difficult to access for those first entering the welfare system, with qualifying periods of seven (and later nine) years introduced for example for social assistance, with individuals only being eligible for a lower 'integration benefit' during the qualifying period.
Many access requirements revolved around demands for cultural 'integration', and the Danish family ideal was explicitly promoted by way of child benefit caps at two children. Overall, these measures resulted in a 'culturalization' of social citizenship, with ethnic minorities the de facto addressees of tighter provisions, although these measures were not legally confined to them (Jønsson and Petersen, 2013). The developments during the 'fourth stage' led to changes in the postwar model of social citizenship. The tighter monitoring of welfare recipients meant that the ways in which social citizenship was exercised weakened existing civil and political rights (Magnussen and Nilssen, 2013). The more encompassing focus on citizen duties, extending to a deeper engagement with society, meant that the characteristic Nordic socio-liberal citizenship, in part, became more republican. As far as benefits were reduced and self-responsibility emphasised, this also signalled a shift towards a 'libertarian' conception of citizenship. However, scholars agree that the dominant citizenship model in Denmark remained the socio-liberal one developed in the twentieth century (Johansson and Hvinden, 2007: 223; Mouritsen, 2015: 32). And, despite making initial access more difficult and placing greater emphasis on the duties of welfare recipients, one guiding principle remained that, once an individual was a full member of the community of 'social citizens', individuals were treated as equal in status. The ghetto policies, however, are in the process of radically altering this idea of social citizenship.

The invention of the Danish 'ghetto'

The use of the term 'ghetto' in political discourse has changed considerably throughout Danish history, as have official policies surrounding segregated areas. In the late nineteenth and early twentieth centuries, there had been various different geographic areas in Denmark with comparatively high proportions of immigrants. However, in public discourse, the term 'ghetto' was only used in relation to one of them, the Borgergade-Adelgade Quarter in Copenhagen, an area predominantly inhabited by Russian Jews (Freiesleben, 2016: 111-113). The policies concerning segregated areas in the early twentieth century focused mainly on the refurbishment and demolition of housing. Yet while the term 'ghetto' was widely used in media and wider public discourses at the time to refer to this area, it was not used at the official policy level. In its current meaning, the term 'ghetto' did not enter political debate until the 1960s, following the first larger postwar immigration movements. 'Ghettoization' had strong negative connotations and came to refer almost exclusively to areas inhabited predominantly by immigrants. The background to these debates over ghettoization was that immigrants - or 'guest workers', as they were referred to - frequently settled in Denmark's subsidized public housing estates, while better-earning individuals were abandoning these areas. This led to high concentrations of non-ethnic Danes in specific localities. However, since it was assumed that 'guest workers' would return to their home countries, many deemed the problem to be only temporary. As it became clear that many immigrants were going to stay in Denmark, however, areas with high proportions of immigrants were viewed with growing concern by politicians, in particular on the right of the political spectrum.
By the mid-1990s, the topic of segregation had reached the centre of political debate and the term 'ghetto' began to be used across the party spectrum (Freiesleben, 2016: 118). A number of political initiatives followed that aimed at alleviating some of the social and economic problems associated with areas of high immigrant populations. At this stage, however, the term 'ghetto' was not yet used widely in official policy documents and was met with some scepticism in the political arena (2016: 119). By the late 1990s, however, the government expanded its policies in the area of 'vulnerable' housing areas and began to use the term 'ghetto' in official policy papers. Despite not being clearly defined, the term 'ghetto' was now a commonplace in political debate and policy rhetoric, and ghettoization was widely accepted to be taking place (Freiesleben, 2016: 94).

The welfare state and the ghetto: The early initiatives

In 2000, the Social Democratic government increased financial resources to counter 'ghettoization'. This was accompanied by a series of legislative measures focused on Denmark's public subsidized housing sector, which the government had identified as the main locus of Denmark's social problems. The Danish public housing sector is one of the largest in Europe, making up around 22% of the housing mass (OECD, 2020). For this reason, the Danish government was able to implement its policies widely and across the country by changing the laws regulating public housing. Denmark's public housing is run by housing associations and open to all residents of Denmark via a waiting list scheme. The housing associations receive public funding, which in turn gives the municipalities allocation rights for up to 25% of available housing to alleviate social hardship. One of the central aims of the government's early policies was to bring about a more 'balanced composition of inhabitants'. This was to be achieved through the introduction of 'flexible' letting rules, under which housing associations could give certain groups of applicants preferential treatment, for example wage-earners and students. The government argued that these new rules would allow housing associations to counter negative developments in 'problem' areas by attracting individuals with a stronger attachment to the labour market (Folketingstidende, 1999-2000). From 2001 onwards, Denmark was led by a centre-right coalition government propped up by the right-wing populist Danish People's Party (DPP). This government - partially due to its dependence on the DPP - brought about a profound shift in the country's immigration policy, with an increased focus on assimilating immigrants and individuals of 'non-Western' origin into Danish society and on tightening access routes to residence and citizenship. Its hostile attitudes towards immigration and multiculturalism were part of an overall shift towards more xenophobic rhetoric and regulation across Western countries following 9/11. Ethnic minorities were increasingly portrayed as a threat to Western democracies, and the early 2000s saw the introduction of a wide array of 'preventative measures', limiting ethnic minority rights in the fields of policing, naturalization and immigration, across Europe (Kaya, 2009; Vertovec and Wessendorf, 2010).
Within the political discourses of this time, 'ghettos' came to refer more explicitly to Muslim and non-Western immigrant communities and were cast as a threat to the 'social cohesion' of the welfare state (Jønsson and Petersen, 2013; Peters, 2014). In 2004, the Danish Prime Minister Anders Fogh Rasmussen announced a comprehensive plan to combat ghettos in his New Year's speech. Rasmussen linked ghettos closely to immigration, stating that '[...] Ghetto formation leads to violence and crime and confrontation. We know this from abroad. And we neither can nor will accept this in Denmark' (Rasmussen, 2004; translation by the author). What followed in May that year was the government's first comprehensive strategy paper on ghettos. Entitled 'The Government's Strategy against Ghettoization' (Regeringen, 2004), the paper set out a series of measures for countering social problems in 'ghettoized areas', and for preventing further ghettos from emerging. The government paper roughly identified eight areas across Denmark as ghettos. It identified these areas based on the following 'indicators': a high proportion of adult residents living on transfer payments, low education levels, a dominance of subsidized housing estates, 'asymmetric moving patterns', and an overall lack of investment (Regeringen, 2004: 15). The paper did not list ethnicity as an explicit indicator for ghettoization. Nevertheless, as we shall see, the ghetto strategy paper revolved strongly around ethnic minorities. To the government, one of the key problems with ghettos was their place in the welfare state. The paper argued that ghettos showed a stark deviation in terms of residents' contribution to the welfare state when compared to other parts of the Danish population. As such, ghettos were seen as a potential threat to social cohesion and the functioning of the welfare contract. The main problem was identified as the high concentration of unemployment in these areas. Ghettos were seen as a deviation from the Nordic 'work-line' and a risk factor in generating a 'culture of unemployment' that discouraged individuals from becoming - or returning to be - contributors to the welfare state. Accepting such places of deviance could, in the government's reasoning, lead to a disintegration of society (Regeringen, 2004: 11). The strategy paper offered a number of explanations for the deviation from the Nordic 'work-line' in ghettos. The main reason it identified was a lack of awareness of 'Danish values'. This lack of awareness was caused by a concentration of immigrants who often brought with them a work ethic that did not match the Nordic model, in particular with regard to women's employment. The paper almost exclusively framed deviations from the Danish 'work line' as an immigrant problem and did not address unemployed 'ethnic Danes'. The ghetto paper proposed that ghettos be converted into places that promote labour market participation, which in turn was deemed essential in enabling individuals to become part of society 'on an equal footing with others' (Regeringen, 2004: 11). Because the government had identified ethnic difference as a key explanation for high unemployment, the main policy focus of the ghetto initiative was to achieve a greater exposure of ghetto residents to 'Danish values'.
This was to be achieved by way of a 'more balanced composition' of residents, which would facilitate contact with 'Danes', make residents engage with the Danish language, and enhance their understanding of those 'norms and values that count here' (Regeringen, 2004: 12). The main policy instrument chosen was 'social mixing', a housing policy widely popular across Europe at the time (Phillips, 2010: 211; van Gent et al., 2018). While the social mixing paradigm has become increasingly contested following studies suggesting it is ineffective, it has been a powerful driver of European housing policy during the past few decades (Jepsen and Nielsen, 2018). In pursuing its aim of social mixing, the government focused on the allocation of public housing, in line with its earlier measures in the late 1990s. The 2004 strategy called on municipalities and housing associations to use their existing allocation powers for public housing in order to counter ghettoization. The 2004 strategy urged municipalities to take into account the nature of the particular area when allocating individuals with pressing social housing needs, meaning that 'weak resource' individuals should be allocated to areas with fewer social problems. The strategy also encouraged housing associations to engineer the composition of residents more actively by utilizing the possibilities for 'flexible letting' based on criteria such as employment, education and income (Regeringen, 2004: 23). In addition, the strategy aimed at introducing a number of new allocation powers for subsidized housing in ghetto areas (Regeringen, 2004: 27-45). It proposed a new model - 'combined letting' - which could be instituted in ghetto areas by agreement between the respective municipality and housing association. Under these new provisions, municipalities and housing associations could agree to reject welfare recipients already on a waiting list for subsidized housing, if their moving into the estate was deemed to increase the hardship of the area. The individuals rejected from waiting lists in a particular area would instead need to be allocated a flat in a different location within six months of the rejection notification. The paper also proposed that no new welfare recipients should be added to public housing waiting lists of areas at risk of ghettoization. The government's emphasis on promoting the interaction of 'ghetto residents' with 'Danes' was also expressed in further initiatives proposed in the ghetto strategy, which concerned cultural integration, crime prevention and schooling. Schools were to be permitted to promote a balanced ethnic composition, and to reject students of non-Danish ethnicity, if the schools were already deemed to be 'overburdened'. These measures have to be seen in the context of other 'cultural integration' policies of the time. In 2004, for example, the Danish legislator had introduced mandatory 'language stimulation' for bilingual children whose Danish was deemed insufficient at the age of three. This involved 15 hours of weekly Danish language training. While this measure was not directly connected to the ghetto strategy, it forms part of a series of measures that are aimed at increasing (future) labour market participation among non-ethnic Danes through greater exposure to Danish language and culture. Finally, the proposal contained plans to allow for council houses to be repurposed for businesses to make vulnerable areas more vibrant, and to sell a number of publicly owned houses to create a private property market.
All of the measures proposed in the 2004 strategy paper were subsequently implemented by law. The 2004 strategy marked the beginning of an increasingly spatial conceptualization of social citizenship. The strategy explicitly focused on groups of citizens in certain localities rather than individual performance within the welfare state. This meant that individuals were subjected to a greater degree of scrutiny due to their place of residence, regardless of their individual employment status. The spatial conceptualization of social citizenship was also reflected in how the state began to interfere more actively with the residence preferences of welfare recipients. It limited the possibilities of welfare recipients to realistically access specific housing localities due to its encouragement of preferential waiting lists for wage earners and other resourceful groups. This signalled a new - and very indirect - type of activation policy, which rested on the assumption that an individual's location of residence had an impact on their labour force participation. Although ghettos were not identified directly on the basis of ethnicity in 2004, the problems and ideals expressed in the strategy revolved strongly around culture and ethnicity. The 2004 paper set out a 'Danish' ideal of social citizenship and defined areas in which individuals overall fell short of it. Throughout the ghetto strategy paper, there was an implicit understanding of a particular value set defining Danish social citizenship, accompanied by an assumption that ethnic minorities lacked a strong disposition to fulfilling their social citizenship duties. The 2004 paper stated that equality was a guiding principle of the ghetto initiative: every citizen was, in future, to receive 'equal opportunities to participate in and contribute to society's growth and welfare' (Regeringen, 2004: 11). As such, the ghetto strategy could be interpreted as a measure aimed at facilitating equality. But in line with the increasingly assimilationist stance on integration seen in Denmark and across Europe in the early 2000s, this was a formalistic idea of equality that did not take into account social, economic and cultural realities on the ground. What is more, the vision of future equality that the strategy claimed to advance was accompanied by a process of eroding equality in the present. What justified this, in the view of the government, was the insufficient performance of ghetto residents within the welfare state. This firmly placed responsibility for low labour market participation with ghetto residents themselves, making their situation seem like a choice or individual failure while overlooking the wider structural biases that worked against these individuals in society as a whole (Grünenberg and Freiesleben, 2016). This conceptualization would later be used more explicitly to advance more targeted and punitive measures against ghetto residents and limit their rights further.

Ethnicizing the ghetto

The 2004 strategy had been a testing ground for targeted measures in designated ghetto areas. In October 2010, the centre-right Danish government issued a new ghetto strategy entitled 'Returning the Ghetto to Society - A Reckoning with Parallel Societies in Denmark' (Regeringen, 2010). Unlike the 2004 strategy paper, the 2010 paper set out a precise ghetto definition that was subsequently laid down by law. Based on this definition, the government would go on to issue an annual 'ghetto list'.
A ghetto was now defined as a public housing residential area with a minimum of 1,000 inhabitants, to which at least two of the following applied:
• The share of immigrants and descendants from non-Western countries exceeds 50%
• The share of individuals between 18 and 64 years of age outside the labour market or education exceeds 40%
• The number of criminal convicts exceeds 270 per 10,000 residents
Across Denmark, 29 areas met this definition, a marked increase from the eight ghetto areas identified in the 2004 plan. The introduction of a legal definition meant that targeted measures could now become binding and legally enforced in these areas. This also marked the beginning of a more punitive orientation of the Danish policies compared to those of other countries. Most significantly, 'non-Western immigrants and descendants' (emphasis added) was a novel criterion used in the identification of 'ghettos' that was unknown in other countries (Staver et al., 2019: 13). With the inclusion of ethnicity as a defining criterion for ghettos, the ghettos were framed explicitly in terms of ethnic and cultural problems rather than socio-economic problems alone. Compared to 2004, the 2010 strategy paper was also more detailed in its explications on the relationship between the ghetto and the welfare state. It contained a separate chapter entitled 'Away from passive receipt of public welfare' (Regeringen, 2010: 26ff). The relationship between ghettos and the welfare state was presented as the consequence of a fundamental cultural incompatibility, turning the welfare state into an argument for a comprehensive policy of cultural assimilation. The 2010 initiative was much more detailed than its predecessor, proposing 32 different policy measures. The aim, however, remained that of achieving a more 'balanced composition of residents'. The fact that in some areas six out of ten individuals were from 'non-Western' backgrounds was deemed 'unacceptable' (Regeringen, 2010: 15). The strategy went beyond the 2004 initiative by proposing that refugees and individuals from non-EEA countries could not be allocated by municipalities to areas defined as a ghetto. Moreover, the 2010 plan further weakened the position of welfare recipients. In addition to the rules of the 2004 framework, according to which municipalities could not allocate recipients of social assistance to ghetto areas, those on unemployment benefits, sickness benefits or early retirement schemes were now also to be excluded (Regeringen, 2010: 16). A further proposal, which was later abandoned, was that ghetto residents would not be eligible for family reunification. Besides housing, a further focus of the strategy was on activation measures aimed at ghetto areas and an increased focus on enforcing obligations. There were, for example, to be more Jobcentres in ghettos, aimed at facilitating labour market integration. Another focus of the 2010 ghetto plan was on children. One aspect of this was a new law under which bilingual children had to be enrolled in mandatory Danish daycare at the age of three for 30 hours per week (an increase from the 15 hours under the previous rules) if their Danish was deemed of insufficient proficiency. While this applied across the country, the ghetto paper stressed that this would need to be particularly vigorously enforced in ghettos. Municipalities were to sanction parents by reducing their child allowance if they did not comply (Regeringen, 2010: 21).
Finally, the government intended to sharpen measures against social fraud and proposed 'systematic inspection efforts' (Regeringen, 2010: 32). The 2010 strategy reinforced the spatial divide of citizens commenced by the 2004 strategy. It led to more groups being excluded from municipal allocation and from public housing waiting lists. Municipalities had to ensure that an individual would be allocated housing elsewhere within six months if they were rejected from a public housing waiting list in a ghetto. While as a whole, access to public housing was therefore not substantially diminished, the possibility of waiting times and the overall reduction of available housing for the individuals concerned still meant a reduction of their opportunities within the public housing sector. As in the 2004 strategy, this was justified as a type of indirect activation policy, according to which keeping specific groups of individuals away from specific localities was deemed to facilitate their labour market integration. The most profound change following from the 2010 strategy was that ethnicity became a legal criterion in the identification of ghettos. Underlying this was an assumption that immigrants and individuals of 'non-Western' descent were less likely to fulfil their role within the welfare state. As in the previous strategy, problems concerning welfare state contribution - and policy measures developed to address them - were linked to specific localities. But with ethnicity becoming the very criterion for defining these spaces, the spatial and the ethnic were coupled. The 2010 ghetto strategy thus signalled an explicit move towards an ethnicized understanding of social citizenship.

Interlude: The 2013 ghetto plan

In 2011, a Social Democratic government assumed power. This government disagreed with its predecessors' ghetto criteria - in particular the criterion of ethnicity. It also took issue with the term 'ghetto' more generally and considered removing it from official vocabulary (Freiesleben, 2016: 164). In May 2013, the government presented its new strategy paper 'Vulnerable Housing Areas - The Next Steps - The Government's Strategy for a Strengthened Initiative' (Regeringen, 2013), in which it outlined a new set of ghetto criteria aimed at making the lists more nuanced. Despite opposition from the conservative and right-wing parties, the legal ghetto definition was expanded from three to five criteria to include income and education. The two additional criteria were:
• The share of residents between 30 and 59 years of age without an occupational education exceeds 60%
• The average gross taxable income for individuals over the age of 15 is less than 60% of the regional average
For an area to count as a ghetto, a residential area comprising 1,000 residents had to meet three out of the five ghetto criteria. Despite the Social Democrats' attempts to distance themselves from their predecessors' ghetto policies, however, the underlying rationale of the ghetto policies remained the same. Ghettos continued to be portrayed as a threat to the individuals inside them as well as to overall social cohesion, and apart from changing the definition of a ghetto, the new initiative proposed only cosmetic changes to the legal tools available.

The 2018 ghetto initiative and the reshaping of the welfare contract

In 2018, the Danish centre-right government launched a new ghetto strategy. The plan, entitled 'A Denmark without Parallel Societies - No Ghettos by 2030' (Regeringen, 2018), was the most radical ghetto initiative yet.
It proposed to get rid of ghettos 'once and for all' by 2030 (Regeringen, 2018: 6) and introduced a range of new policy measures in order to achieve this aim. As a result of the 2018 initiative, the ghetto definition was changed once again. Residential areas could now fall into three different categories: vulnerable housing areas, ghettos, and hard ghettos. A vulnerable housing area met two out of the following four criteria:
• The share of residents between 18 and 64 years of age outside the labour market exceeds 40%
• The share of criminal convicts exceeds 2.7%
• The share of residents between 30 and 59 years of age with no more than primary school education exceeds 60%
• The average gross taxable income for individuals between the age of 15 and 64 is less than 55% of the regional average
A ghetto, in turn, was an area that:
• Meets the criteria of a 'vulnerable housing area', and
• The share of immigrants and descendants from non-Western countries exceeds 50%
By way of the new definitions, ethnicity had become an essential criterion in identifying ghetto areas. This identification based on ethnicity was intended to allow government strategy to focus on the 'distinct challenges in ghetto areas based on a lacking integration of immigrants and their descendants from non-Western countries' (Folketingstidende, 2018-2019). A hard ghetto, finally, was any area that had been on the government's official ghetto list for the past four years. The ghetto strategy paper employed stronger rhetoric than the previous ones, emphasizing the unsustainability of tolerating ghettos within the Danish welfare state. Overall, the measures adopted were significantly more far-reaching than those of previous initiatives. The new measures for achieving a 'balanced composition of residents' in vulnerable housing areas included prohibiting municipalities from allocating welfare recipients to social housing in such areas and rendering their residents ineligible for family reunification. Moreover, housing associations in vulnerable housing areas were obliged to introduce preferential treatment for wage earners, individuals in education or in an apprenticeship, or individuals who have been self-sufficient for more than six months. In 'hard ghettos', housing associations would be obliged to reject recipients of integration allowances or social assistance from waiting lists. Moreover, the paper proposed that individuals would have their level of social assistance reduced if they moved into hard ghetto areas. It is important to note that while the 2018 strategy built on earlier instruments, the law was changed to make use of these instruments mandatory for municipalities and housing associations. The 2018 strategy also focused on the physical restructuring of ghetto areas. For 'hard ghettos', the government introduced a requirement that municipalities and housing associations produce a development plan on how the percentage of subsidized housing properties could be reduced to 40%. If no feasible plan was developed to the satisfaction of the responsible government ministry, the government would take over the properties in question and privatize or demolish them. A further significant measure of the 2018 plan was the parental obligation to enrol all children in Danish-language day care from the age of 12 months (paid by the state), with benefit sanctions being the consequence if parents did not comply. Notably, this new rule only applied to vulnerable housing areas and was not a general rule across Denmark.
Almost all of the measures proposed in the 2018 paper were adopted by law. The only measure abandoned during the legislative process was the reduction of social assistance for individuals who moved into ghetto areas. The intricate sub-differentiation between vulnerable housing areas, ghettos and hard ghettos and the various measures of the 2018 initiative continued the previous development towards a spatialization and ethnicization of social citizenship. The 2018 strategy was the first strategy to introduce special conditionalities for individuals in vulnerable housing areas (in the form of mandatory day-care for children from the age of twelve months). This meant that residents of ghettos now carried a different set of duties in return for social rights than those outside. The mandatory nature of many of the allocation instruments for municipalities and housing associations further sharpened the meaning of space in the promotion and enforcement of the welfare contract. With language requirements and exposure to 'Danish' culture a key aspect of full access to child benefits, the 2018 initiative also continued the culturalization of social rights. Overall, the Danish ghetto policies are much more punitive and focused on ethnicity than those of neighbouring countries - something which the 2018 initiative confirmed. Both Sweden and Norway also launched anti-segregation strategies in 2018, but these were much more focused on 'enabling' measures that seek to counter the wide range of socio-economic disadvantages experienced by residents of segregated areas (Staver et al., 2019). In a Nordic and European comparison, the Danish 2018 initiative stands out as one that explicitly targets ethnic minorities and limits their rights based on culturalist attributions of responsibility for a lacking contribution to the welfare state.

Conclusion

Over the past two decades, the Danish ghetto strategies have changed the way 'social citizenship' is understood and shaped in Denmark. This article has highlighted how the four ghetto initiatives have gradually withdrawn full social citizenship status from ghetto residents. As we have seen, the immediate physical and social environment of an individual has begun to play an increasingly central role in defining an individual's place in the wider national welfare community. The ghetto strategies have introduced a spatialized citizenship ideal in which an individual is no longer viewed only in relation to their individual contribution to the welfare state, but also in terms of their social and ethnic environment, as translated into geographic territories. In its most extreme manifestation, this has led to the curtailment of certain rights associated with social citizenship, not because of an individual's failure to comply with their citizenship duties, but because of the overall 'performance' of the area in which they reside. The most problematic marker of this spatialization is ethnicity. Ethnicity - rather than socio-economic factors alone - is seen in Danish policy as a key element of an individual's (potential) contribution to the welfare state. The curtailment of rights in specific localities is - by extension - justified on the basis that ethnicity as such constitutes a problem for the welfare state, due to an absence of 'Danish values' in these areas.
While in most welfare states, curtailment of rights and the use of welfare sanctions tend to disproportionately affect ethnic minorities in practice, the Danish state has explicitly made ethnic minorities the target of such measures. Many policy measures of the Danish ghetto strategies can be found in countries across Europe. The 'social mixing' paradigm has been popular among European policy makers, although it has faced increasing criticism for being inefficient. But while attempts to facilitate 'social mixing' have been employed across Europe, the Danish ghetto policies are unique. In no other European country have legal 'zones' been created in which individuals need to comply with additional demands in return for welfare benefits. The Danish policies are also much more explicitly concerned with ethnicity and immigration than those of other Nordic countries, for example. In justifying its ghetto strategies, the Danish government has argued that its aim is to promote equality. In doing so, it has drawn upon abstract notions of future equality in regard to both an individual's opportunities in, and contribution to, the Danish welfare state. But by introducing specific conditions for residents of particular areas, the ghetto strategies depart drastically from previous notions of social citizenship rights granted on equal terms. This futurist orientation of the ghetto policies erodes social citizenship in the present and idealises a future in which there will be no ghettos. The justification is based on the idea that ghetto residents need to improve their performance within the welfare state, and that their insufficient performance is a matter of choice or will. Ghetto inhabitants are thereby constructed as 'undeserving' and 'unfinished' citizens who need to prove themselves worthy of (re-)gaining full social citizenship. Blame for specific socio-economic outcomes is overwhelmingly attributed to ghetto residents themselves, which sidelines the fact that many social outcomes are not the result of individual choice or attitude, but of the wider structural discrimination faced by ethnic minorities. The consequence of the ghetto policies is that they allow for a differentiation among citizens and - ultimately - the attachment of stigma to certain citizens. It was one of the achievements of social citizenship as conceived in the twentieth century to promote social inclusion and remove social stigma from members of the welfare polity. The ghetto policies mark a drastic deviation from these core guiding principles of twentieth century welfare thought. Research has already shown how ghetto residents are stigmatized and how Danish society has become divided between ideas of 'us' and 'them' (Simonsen, 2016; Schultz-Larsen and Delica, 2019). Once ideas of differentiated citizenship status become more deeply entrenched among policy makers and in wider society, they are only likely to lead to further inroads into the ideals of equality that were once the hallmark of the Danish welfare state.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.
Consumers’ Behavior in Selective Waste Collection: A Case Study Regarding the Determinants from Romania

The increase in consumerism due to population growth, excessive advertising and the constant encouragement of buying behavior by advertising media and opinion formers comes with side effects for the environment and public health if it is not properly supported by a sustainable selective waste collection process. In this context, the paper aims at determining the impact of different elements on people's intention to participate in selective waste collection and on their behavior related to the collection process. Based on the literature, a series of variables were considered and a questionnaire was created in order to extract people's opinions related to the selective waste collection process. As discrepancies in findings might appear due to culture in various countries, the analysis has been conducted with reference to Romania's case. The waste collection situation in Romania is similar in some ways to other countries in the world, with some differences related to a small recycling rate compared to other countries in the European Union. In this context, it is important to identify the determinants of the consumers' behavior in selective waste collection and to act based on these findings. Creating better policies that can support the selective waste collection process will result in increasing the waste collection rate, offering a cleaner and safer environment to all citizens.

Introduction

Selective waste collection is part of the municipal waste management processes and it involves storing recyclable waste (metal, glass, paper and plastic) separately from wet biodegradable waste with the purpose of recycling it. Selective collection involves sorting the recyclable waste directly at the source, i.e., the waste sorting process is done directly by the person who generates it, while the transport of the two fractions (recyclable waste and biodegradable waste) is made by the sanitation operators. Selective waste collection involves the separation of the two fractions by the waste producer, with the requirement that the dry fraction (glass, paper, plastic, metal) must be stored properly. Therefore, it is mandatory for the collected waste to be dry and clean, as the storage of wet waste or traces of food scraps can contaminate whole batches of waste and may make their recycling impossible. Small and large appliances, batteries, textiles, used cooking oils and light bulbs can also be selectively collected. The difference between these latter types and the main types of waste is the collection method, as some of the latter can endanger people's lives, so they require a specific collection infrastructure. There are two ways in which selective waste collection can be carried out: door-to-door collection or voluntary collection [1,2]. Door-to-door collection is done by each waste producer and involves separate bins for the two fractions, collected either in a mixture or separately in individual containers using a color code such as: orange for plastic, yellow for metal, blue for paper, green for glass, black for biodegradable waste and red for e-waste. By using the color scheme, the persons producing the waste can easily identify the specific bin for each type of waste, preventing the selection of the wrong bin, and can easily participate in the waste collection process without too much effort as, in most of the cases, the containers are located close to their home.
For the voluntary collection method, the storage containers are not assigned to specific producers and they are located in places as accessible as possible for producers who voluntarily want to collect selectively. As with door-to-door collection, there can be two containers for the two fractions, or a single one for all types of recyclable waste. In Romania, the municipal waste generated per capita increased by 8.37%, from 251 kg/capita in 2012 to 272 kg/capita in 2018 [3]. At the same time, the recycling rate reported for 2018 was only 11.1% of the generated waste, a value situated below the European Union average rate of 47% [3]. It is worth noting that the main method used for waste management in Romania is landfilling, followed by other forms of recovery and finally recycling. The landfilling of municipal waste is a method that should be carried out with maximum safety to prevent the chemicals resulting from the stored waste from entering the soil. In the event of spills of hazardous chemicals into the soil, they can contaminate groundwater and soils in the region. Additionally, to the best of our knowledge, there is little evidence regarding the waste collection process in the case of Romania and its main triggers. Hansmann et al. [4] underlined in their study that, due to discrepancies in cultural and socio-economic factors, findings related to the determinant factors may differ across countries. In this context, the current paper aims at analyzing the determinants of the consumers' decision to participate in the waste collection process. This issue is of major importance as an increase in the volume of waste generation is estimated in the following years [5]. According to a World Bank report, in 2030, in Europe and Central Asia there will be 440 million tons/year of waste produced, while in 2050 the volume will increase to 490 million tons/year, compared to 392 million tons/year reported for 2016 [5]. Strictly related to global mismanaged plastic waste generation, Lebreton and Andrady [6] show that, without proper plastic waste management, by 2060 the amount of mismanaged plastic generated worldwide would be close to 220 million tons/year, 340% more than in the scenario in which a proper waste management plan is used. Creating better policies that can support the selective waste collection process will result in an increased waste collection rate, offering a cleaner and safer environment to all citizens. The remainder of the paper is structured as follows: Section 2 provides an overview of the waste situation in Romania and discusses it in comparison with other countries in the European Union. Section 3 presents a literature review on the topic of waste collection and management and underlines the main determinants presented by the scientific literature as having a high impact on the consumers' decisions related to waste collection and recycling. Section 4 discusses the methodology associated with this study, highlighting the elements considered in the questionnaire and stating the main hypotheses of the study. Section 5 analyzes the results gathered using the questionnaire and discusses the hypotheses validation. The paper ends with concluding remarks and a discussion related to the limitations of this study. The Waste Situation in Romania The situation of waste in Romania is similar in some respects to the situation of waste around the world.
Simultaneously with population growth, the evolution of technology and the increase in people's daily activities, the amount of waste generated has also increased. The redefinition of social norms and the innovations that appear more and more often have led to a major transformation of society. It has been observed that people have started to buy goods that they end up not using or that are not a necessity, to buy more food than they can consume, and to want to own the latest technologies on the market. This might be a consequence of the influence of aggressive advertising campaigns, which make people end up buying any product that is being excessively promoted. However, at the level of the European Union (EU), as well as at the level of Romania, there is a decrease in the quantity of municipal waste due to legislative changes and an increase in awareness of the impact of waste on the environment [7]. In the 2008-2018 period, an average of 495 kg of municipal waste per capita was generated in the EU (Figure 1), of which 31.44% was deposited on the ground, 26.65% was recycled, 24.53% was incinerated, 14.67% was composted and 2.71% was treated by other methods [8]. As can be seen in Figure 2 and from the data offered by Eurostat [8], in the analyzed period 2008-2018, most of the generated waste was deposited on the ground, an average of 155.6 kg per capita. The country that generated the most waste in 2018 was Germany, followed by France, Great Britain and Italy [8]. Romania ranks 11th out of 28, having generated 5,296,000 tons of waste and 272 kg/capita [8]. The countries that generated the least amount of waste are Malta and Luxembourg, with less than 400,000 tons, which might be connected to their small area and smaller number of inhabitants compared to other EU countries. In Romania, there is a significant decrease in the amount of waste generated. It can be observed that from 2008 to 2018 the municipal waste generated decreased by approximately 139 kg/capita, due to national and European regulations to reduce waste, the reduction of the amount of packaging used in various economic activities and the placing on the market of alternatives that are more environmentally friendly or that can be reused several times for the purpose for which they were produced before losing their usefulness. Figure 3 shows a downward trend until 2012, then a slight increase in 2013, followed by decreases in 2014 and 2015, the year in which the lowest value of the analyzed period was recorded. In 2017 and 2018, the amount stabilized around 272 kg/capita (approximately 5 million tons of waste) [8]. Figure 4 presents the municipal waste recycling rates in 2017 for the 28 EU countries. This indicator measures the percentage of recycled waste in the total amount generated in each country and includes material recycling, composting and anaerobic digestion. As can be observed in Figure 4, Germany is the country with the highest recycling rate (67.2%) in the EU. Based on this result, it can be stated that even if it is the country that generates the largest amount of waste, it also properly manages a large share of the generated waste. On the other hand, Romania is among the last two countries with respect to this indicator, scoring only 14% for the municipal waste recycling rate, while the EU average for municipal waste recycling is 46.2%.
In 2018, the municipal waste recycling rate in Romania decreased by 3 percentage points, reaching 11%, which makes it almost impossible to reach the 50% target imposed by the European Union for each member country by the end of 2020. Among the objectives that Romania must achieve by the end of 2020 are the recycling or reuse of over 70% of the amount of construction waste, the recovery of 60% of packaging waste from the total placed on the market, the annual collection of 4 kg/inhabitant of electrical waste, the selective collection of biowaste for composting and a 35% reduction in the amount of biodegradable waste deposited on the ground compared to the amount deposited in 1995 [9]. Failure to comply with EU rules has a major impact not only on the environment, but also on the economy, because missing these targets will generate very high penalties to be paid to the European Union. Even though the "pay for how much you throw" system became mandatory on 1 January 2019, it has not been implemented throughout the country. As a result, the amounts of waste that end up being deposited on the ground fluctuate around 70% of the total municipal waste (Figure 5). Although the landfilling price per ton increased by 266% in 2020 compared to 2019 [7], the territorial administrations have not succeeded in making a well-performing selective collection infrastructure available to all citizens, even though the legislation imposes an obligation to take these measures. At this moment there are no concrete data to identify the rate of selective collection in Romania, but we can assimilate this rate to the recycling rate, because the recycled waste represents the waste that was collected selectively and met all the specifications of selective collection. From this point of view, the situation in Romania differs from that of other EU countries, as the recycling rate is much lower compared to the other states. The decrease recorded for 2018 can only make the target more difficult to achieve. As can be seen in Figure 5, the main method of waste management in Romania is landfilling, followed by other forms of recovery and recycling. Landfilling of waste must be carried out with maximum safety so as to prevent the entry into the soil of chemicals resulting from the storage of waste. In the event of spills of hazardous chemicals into the soil, they can contaminate groundwater and soils in the region. Landfills also release harmful gases into the atmosphere that contribute to questionable air quality for those living near them, especially in the hot season [10]. However, in Romania there are also examples of good practice that have complied with European regulations and have exceeded the target of 50%. Such an example is represented by the city of Târgu Lăpuș, in Maramureș County. In 2013, the rate of selective collection in Târgu Lăpuș was 58.54%, and the amount of waste generated was 212 kg/inhabitant, 37 kg less than the national average [12]. For 2020 the aim is to reduce the amount of household waste landfilled by 90% compared to the amount for 2010. Regarding the collection of waste, it should be mentioned that in this city selective collection is practiced directly at the source. The situation of waste in Romania is an important and topical issue for which solutions are sought throughout society in order to reduce and reuse the generated waste. As a result, we aim at identifying the factors that determine the consumers' selective collection behavior.
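Before moving on to the literature review, the EU-level figures quoted in this section can be cross-checked with a quick back-of-envelope computation. The short Python snippet below is only a sanity check using the treatment shares and the 495 kg/capita average reported above; for instance, it reproduces the 155.6 kg/capita landfilled figure from the 31.44% share.

```python
# Back-of-envelope check of the EU 2008-2018 averages quoted above:
# 495 kg/capita generated, split across treatment methods by the
# reported shares; the landfilled quantity should match ~155.6 kg.
avg_generated_kg = 495.0
shares_pct = {
    "landfilled": 31.44,
    "recycled": 26.65,
    "incinerated": 24.53,
    "composted": 14.67,
    "other": 2.71,
}

for method, pct in shares_pct.items():
    print(f"{method:>11}: {avg_generated_kg * pct / 100:6.1f} kg/capita")

# The quoted shares should cover (essentially) all generated waste.
print(f"total share: {sum(shares_pct.values()):.2f} %")
```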
Literature Review As the scientific literature related to this field is vast, we have extracted for the literature review a limited number of papers, trying to cover various approaches from different parts of the world. The literature review is discussed from three points of view: the variables considered in the research papers, the methods used in research and the tested hypotheses. Research Variables Considering the literature, it has been observed that various indicators have been used when discussing the determinants of selective waste collection and engagement in the recycling process. As a result, 22 determinants have been identified, as shown in Table 1. Some of them have been listed directly in the mentioned papers (under the same or similar names), while for the others we have identified, in the questionnaires used by the authors, a series of questions referring to the presented determinants. Based on the data in Table 1, it has been observed that the variables most used in the papers addressing the consumers' behavior related to selective waste collection and recycling have been Social Norms, Attitude, Perceived Behavioral Control and Selective Collection Intention. Table 1 shows a slight increase in the number of indicators analyzed from year to year. With the development of technology and the advancement of research methods, researchers have begun to consider an increasing number of behavioral factors. As a result, in order to conduct a valid study, one must consider as many influencing factors as possible. Among the factors that have recently begun to be considered in the research literature, one can name the social networks included in the Social Media indicator, non-governmental organizations (NGOs), Global Warming, Government Measures and the possibility of storage, conditioned by the availability of a space that allows the temporary storage of waste before it reaches the specially designed bins (Storage Space Existence), as listed in Table 1. Table 1 also shows the countries for which the research was conducted; a high interest in the waste situation in Asia can easily be observed, as more than half of the studies discuss countries from this part of the world. As for the variables' influence, there are situations in which most of the studies indicate a positive or a negative influence on the consumers' behavior, but there are also studies which prove the opposite. In order to provide an adequate picture, we discuss in the following each variable and how it has scored in different studies related to selective waste collection and recycling. Social Norms Social norms derive from the way a person perceives that the people who matter to them, or society in general, would expect them to act. Social norms represent the types of behavior that society accepts or that other people expect the individual to adopt [27]. As previously mentioned, Social Norms is the variable found in most research. The study conducted by Amini et al. [23] aimed to investigate the influence of economic instruments (taxation and reward) on household recycling intentions in order to help the Malaysian government enforce the necessary recycling regulations; by applying several multiple regression analyses, it was shown that Social Norms has a significant impact on the intention to recycle, even though the impact was smaller than that of attitude.
Mahmud and Osman [17] considered social norms in their study and found that they have a significant influence on recycling behavior. Valle et al. [16] showed that social norms have a significant impact both on personal norms and on recycling behavior. For the case of young people, Halder and Singh [43] observed that social norms have the strongest and most significant impact on students' intention to recycle. The same conclusions were highlighted by Nduneseokwu et al. [31] in a study on the population of Nigeria. A positive relation was also found by Miliute-Plepiene et al. [28] in the case of Lithuanian consumers, while for Norwegian consumers the results were not statistically significant. On the other hand, Boldero [13] determined that the norms did not have a significant impact on behavior or on selective collection intent. Social Media Due to the development of social media, the social influence manifested on these platforms has been analyzed in order to see whether it has an impact on the consumers' behavior in relation to the recycling and selective waste collection process [44][45][46]. Sujata et al. [37] analyzed the role of social media in consumers' recycling habits and established that social media usage is a positive, significant, but weak predictor of behavioral intention. The same results were obtained by Delcea et al. [42]. Attitude Attitude towards behavior represents the positive or negative feelings of the individual when adopting a certain behavior [45]. It is determined by an assessment of one's beliefs and of the consequences that result from a behavior, and it depends on the desirability of these consequences [26]. Halder and Singh [43] showed that attitude is the second strongest variable that positively influences the intention of young people to recycle. The positive and significant influence of attitude on the intention to participate in the selective collection of e-waste has been underlined in the study conducted by Nduneseokwu et al. [31]. The authors showed that attitude is the third most significant factor influencing intention, after social norms and environmental knowledge. A positive and significant impact on the intention to recycle is found in Delcea et al. [42]. Even in this case, there are studies that found that attitude does not have a significant impact on the intention of selective collection at source, such as the study conducted by Nguyen et al. [25] on the inhabitants of Hanoi, Vietnam. Moreover, Ng [38] found a significant negative relationship between attitude and recycling behavior. Perceived Behavioral Control Perceived behavioral control (PBC) represents the beliefs of individuals regarding the difficulty and control of a specific behavior [47]. More specifically, PBC reflects two dimensions: the external conditions of a person that can increase or moderate the ability to adopt a certain behavior and the perceived ability to perform the behavior [16]. Perceived behavioral control was shown to have a positive and significant influence on consumer behavior in the study conducted by Strydom [36], while having no significant influence on the selective collection intention. Similar results regarding the influence on consumers' behavior were found by Ajzen [48]. As for the influence on selective collection intention, a consensus has not been reached yet: Chu and Chiu [49] found a positive relationship, while Boldero [13] and Halder and Singh [43] found no significant impact.
Intention Intention measures the desire of a person to adopt a certain behavior and it is supposed to be a determinant of behavior [50]. Boldero [13] found a positive and significant impact of intention on selective collection behavior. Similar results were found in [36,42]. Convenience The term convenience refers to the ease of use related to the sorting infrastructure and the proper understanding of how to use it [51]. According to Boldero [13], convenience has a negative and significant impact on the selective collection intention. Ng [38] found a negative and significant influence on consumers' behavior, while Delcea et al. [42] found a reduced positive influence on e-waste recycling behavior. Some of the differences in the approaches might be because in some studies the questions included in the questionnaires referred to the inconvenience rather than to the convenience of engaging in certain activities. Government Measures The role of the Government in protecting the environment is very important. Government measures are represented by laws and regulations in the field of environmental protection meant to encourage the reduction of consumption [26]. Even for this determinant, opinions do not converge, with some of the authors finding a positive influence on behavior [42], while others state that the impact is not significant [28]. Awareness Awareness refers to understanding the effects of waste on the environment and the emergence of a persistent public concern [26]. According to Meng et al. [34], awareness has the highest impact on the selective collection behavior among all the considered variables. A positive and significant impact has also been found in a study related to the determinants of the recycling decision regarding e-waste products [42]. Awareness is one of the factors regarding which different points of view have been encountered in the literature. Considering the Theory of Planned Behavior (TPB) discussed in the papers related to recycling, it can be observed that a series of approaches emphasize the role of awareness on the consumers' intention, further connected to the consumers' decision to recycle. For example, Davis and Morgan [52] state in a paper dealing with waste behavior in Bristol City that the increase of waste minimization awareness should contribute to future recycling levels and to reducing the total waste generated. Wan et al. [22] considered the awareness of consequences as an underlying factor for behavioral intention, but found that even though the connection was positive and significant, the influence was low. Furthermore, Kite et al. [53] underlined the fact that some variables, such as awareness, are causally linked to distant outcomes such as behavior through the connection of other variables such as attitudes, social norms and intentions. Meng et al. [34] included environmental awareness as part of the environmental attitudes with a direct influence on the residents' disposal behavior. Klockner and Oppendal [19] considered awareness as an influence factor for personal norms, which further had an influence on recycling habit and intention, both of which influenced the recycling behavior. Bezzina and Dimech [21] propose a model in which behavior is influenced by personal norms, whose effect is mediated by two factors: the awareness of consequences and the ascription of responsibility.
On the other hand, a series of studies which have started from the Schwartz model of altruistic behavior [54] have considered the direct influence of awareness on behavior. Garces et al. [14] used in their model eight direct variables influencing the recycling behavior, one of them being awareness. Yahya et al. [26] tested in their paper a hypothesis stating that "there is a significant and positive relationship between public awareness and environmentally friendly consumer behavior" (p. 3), concluding that "awareness appeared to have the highest positive significant relationship with environmentally friendly consumer behavior". Likewise, in Grob's model of environmental behavior [55], environmental awareness is listed among the four direct determinants of general environmental behavior [16]. Responsibility In this context, responsibility is the obligation of people to fulfill the self-assigned objective of protecting the environment and properly managing the waste they generate [18]. Responsibility positively influences the intention to recycle electronics among Romanians, but does not have a significant contribution [42]. Personal Norms Personal norms reflect individuals' beliefs about how they should behave [16]. When individuals act in accordance with these rules, they experience a strong sense of pride. Usually, if a personal norm is violated, they feel guilty [16]. According to Nguyen et al. [25], personal norms make a significant and direct contribution to predicting the intention of selective collection directly at the source. Trust Trust reflects the expectations that individuals have of different entities (in this case, the authorities) and regards the means through which these entities succeed in managing the waste situation [28]. In the study conducted by Nguyen et al. [25], trust is the variable that makes the greatest contribution to predicting the selective collection intention of the Hanoi community. Environmental Knowledge Environmental knowledge refers to the knowledge, information and skills necessary for individuals to properly sort waste and to understand its impact on the environment, along with their knowledge of selective collection programs, collection stations and the infrastructure provided by the authorities [34]. According to Meng et al. [34], environmental knowledge has a significant impact on consumer behavior in China. Collection Infrastructure Selective collection infrastructure refers to the access of individuals to collection stations, their condition, their physical storage capacity and the number of available collection stations [34]. It has been determined that the selective collection infrastructure has a significant role in determining consumer behavior [34]. Research Methods and Tested Hypotheses Various approaches have been used for highlighting the connection between the considered determinants and the consumers' behavior. Nevertheless, two methods have been extensively employed in the research literature associated with this field: regression analysis [15,18,21,32,41] and factor analysis [14,17,22,26,29,34,39,42,43].
As for the tested hypotheses, Rosenthal [35] tested seven hypotheses stating that: intention is positively related to procedural information seeking; procedural information seeking positively influences recycling behavior; procedural information seeking mediates the relationship between recycling intention and recycling behavior; procedural information seeking has a positive impact on recycling-related behavioral control; behavioral control mediates the relationship between information seeking and recycling behavior; higher procedural information seeking increases the positive relation between recycling intention and recycling behavior; and behavioral control mediates the moderation effect [35]. Among the considered hypotheses, all but one (related to the mediating role of procedural information seeking in the positive relation between recycling intention and recycling behavior) have been accepted. Nguyen et al. [32] tested five hypotheses. Based on the analysis, the authors rejected only one hypothesis, related to the positive relationship between personal norms and recycling behavior. Based on the results, a positive relationship was observed between attitude, social norms, global warming and recycling behavior, while recycling behavior scored a negative relationship with recycling inconvenience. Yuan et al. [29] considered ten hypotheses. Nine of the hypotheses have been validated; the one not validated referred to the fact that an individual who has a positive attitude regarding the selective collection of food waste is more likely to adopt this behavior [29]. Among the validated hypotheses we mention the existence of positive relationships between perceived behavioral control and behavior, subjective norms and behavior, attitude and behavior, personal norms and behavior, and social norms and attitude, and of negative relationships between attitude and denial of responsibility, and denial of responsibility and behavior [29]. The study conducted by Tonglet et al. [15] presents a model in which attitude, social norms and perceived behavioral control influence the selective collection intention, and intention influences behavior. The results of the study showed that a positive attitude towards selective collection was the best predictor of behavior, and it was also the factor most strongly correlated with the selective collection intention [15]. Wang et al. [40] carried out a study focused on the willingness of consumers to engage in on-line recycling behavior. The authors tested the following hypotheses: attitude positively and significantly influences residents' willingness to participate in selective on-line e-waste collection; subjective norms positively and significantly affect residents' willingness to participate in on-line recycling of e-waste; perceived behavioral control positively and significantly influences residents' willingness to participate in on-line recycling of e-waste [40]. Additionally, positive relationships between economic motivation, income level and level of education, on the one hand, and the willingness to participate in an e-waste on-line recycling process, on the other hand, were tested. The last hypotheses tested were the moderation of the willingness by the level of education or by the level of the resident's income. Across the eight hypotheses, all have been supported, except for the one referring to the moderating role of the education level between the subjective norms and the willingness of the residents to participate in e-waste on-line recycling. Valle et al. [16] considered fourteen hypotheses.
Twelve of them have been supported: social norms and perceived behavioral control positively influence behavior; environmental knowledge and the logistic convenience of using the collection service positively influence perceived behavioral control; social norms positively influence personal norms; the effect of social norms on behavior is mediated by personal norms; and the positive influence of personal norms on behavior is stronger for those who have a positive attitude about selective collection. A positive attitude towards ecology has a positive influence both on the general opinion about recycling and on perceived behavioral control. Hypotheses regarding the positive influence of individuals' personal values on perceived behavioral control and on the general opinion about recycling were also tested. Another hypothesis tested and validated was that of the positive influence of the communication strategy on environmental knowledge. The hypotheses that were not statistically validated stated the positive influence of the communication strategy on perceived behavioral control and of attitude on behavior [16]. Survey Design Based on the literature review presented in Section 3 of this paper, a 53-question survey was created. The questions were designed starting from the questions used by other authors who addressed selective collection or recycling issues in their works. Some of these works are listed in the following: [14][15][16]21,22,[24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][40][41][42]. The questions can be divided into three main groups: 10 demographic and socio-economic questions, 1 control question in which the respondents were asked to name the items they have selectively collected in the past months and 42 questions designed for determining the impact of 11 determinants on the selective collection intention and behavior. Considering the literature, the following broadly observed determinants were included in the questionnaire, along with some other elements less studied in the literature: Attitude, Perceived Behavioral Control, Intention, Environmental Knowledge, Awareness, Government Measures, Responsibility and Waste Collection Infrastructure. All the questions in these categories were evaluated using a 5-point Likert scale with 1-strongly disagree, 2-disagree, 3-neutral, 4-agree and 5-strongly agree. The questionnaire, along with the distribution of the received answers, is presented in Appendix A. Distribution The questionnaire was created and hosted using Google Forms. The form was available for filling in between 21 April 2020 and 16 June 2020. All the questions were marked as "mandatory", except for the control question, ensuring in this way that there were no empty values in the database. No incentives were given for participation. A total of 711 questionnaires were filled in. Considering the control question, 16 persons did not add anything in the field associated with the question and were eliminated from the sample. As a result, a total of 695 answers were kept for analysis, which is a reasonable number of observations [15,[21][22][23][25][26][27][29][30][31][32]34,39]. Questionnaire Analysis and Validation Data gathered through the questionnaire were analyzed and validated using IBM SPSS Amos 26.0.0 [56], by following the steps recommended in Byrne [57]. The structure of the components is presented in Figure 6.
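The validation itself was carried out in IBM SPSS Amos; as a rough open-source counterpart, a comparable confirmatory factor analysis can be sketched in Python with the semopy package. The construct and item names below (for example, Attitude =~ att1 + att2 + ...) are hypothetical placeholders, since the exact item coding of the survey is given only in Appendix A; this is an illustration of the workflow, not the authors' actual model.

```python
# Hypothetical CFA sketch with semopy (lavaan-style model syntax).
# Construct/item names are placeholders, not the paper's actual coding.
import pandas as pd
import semopy

MODEL_DESC = """
Attitude  =~ att1 + att2 + att3 + att4
Awareness =~ awa1 + awa2 + awa3 + awa4
Intention =~ int1 + int2 + int3
"""

# One row per respondent, one column per Likert-coded item (1..5);
# 'responses.csv' is an assumed file name.
data = pd.read_csv("responses.csv")

model = semopy.Model(MODEL_DESC)
model.fit(data)

# calc_stats reports fit indicators comparable to the ones discussed
# below (chi-square/df, CFI, TLI, NFI, RMSEA, ...).
print(semopy.calc_stats(model).T)
```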
A confirmatory factor analysis [58] was used for questionnaire validation (Figure 6 depicts the individual components validation process, step 1). Considering the structure in Figure 6, the values for the goodness-of-fit (GOF) indicators listed in Tables 2-4 were obtained. Table 2 shows the values calculated for the minimum discrepancy weighted by the number of degrees of freedom (CMIN/DF). This value must be below the threshold of 5.00 [57,59]. In our case, CMIN/DF has a value of 5.282, which is higher than the threshold of 5.00. The comparative fit index (CFI), an incremental fit index, should exceed the threshold of 0.9 for the validation of the questionnaire [60]. In our case, the value of the index is 0.797, which suggests that an improved model is needed. The values of the normed fit index (NFI), the incremental fit index (IFI) and the relative fit index (RFI) should also exceed the threshold of 0.9. Considering the values in Table 3, it can be observed that the values for these indicators are below the imposed threshold. According to Brown [61] and Byrne [57], additional analysis can be done by considering the values of the Tucker-Lewis index (TLI), which should be as close as possible to the value of 0.95. The initial model has a value of this index of only 0.762. The last indicators in this category are the root mean square error of approximation (RMSEA) and the lower (LO 90) and upper (HI 90) limits of its 90% confidence interval. Hu and Bentler [60] and Harrington [62] state that the RMSEA value must be less than 0.06 in order to indicate a good fit of the model. In our case, RMSEA records a value of 0.079, which is above the mentioned threshold (Table 4). The confidence interval for RMSEA lies between LO 90: 0.055 and HI 90: 0.081, and is within the threshold value of 0.085 imposed by Paswan [63]. Based on these results, along with the recorded values for the factor loadings, the structure of the model was improved as suggested by Sujata et al. [37]. The new construct is presented in Figure 7. A new validation was conducted and the data in Tables 5-7 were obtained. Table 5 illustrates that the CMIN/DF is well below the imposed threshold value of 5.00, scoring 2.462. NFI, RFI, IFI and CFI exceed the threshold of 0.9, so we can say that the model is also valid in terms of these indicators (Table 6). The TLI value exceeds the 0.95 threshold, supporting once more the validity of the model. RMSEA is below the threshold of 0.06 (Table 7), which signifies a good model fit, while the limits of the confidence interval (LO 90 = 0.040 and HI 90 = 0.051) are below the value of 0.085 suggested by Paswan [63]. Given the above, we can state that the model successfully passed the validation. Considering the constructs validated above, the model in Figure 8 was proposed. Construct reliability and convergent validity are tested using the average variance extracted (AVE) and the composite reliability (CR). The used software does not allow the automatic calculation of these indicators. We determined them manually, using the factor loadings listed in Table 8 and the formulas provided by [64,65]:

$$\mathrm{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^2}{n}, \qquad \mathrm{CR} = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^2}{\left(\sum_{i=1}^{n} \lambda_i\right)^2 + \sum_{i=1}^{n} \delta_i},$$

where $\sum_{i=1}^{n} \lambda_i^2$ represents the sum of the squares of the factor loadings, $n$ is the number of factors in each group, and $\sum_{i=1}^{n} \delta_i$ represents the sum of the standardized error variances, each determined as the difference between 1 and the reliability (squared loading) of the corresponding element.
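Since the indicators had to be computed by hand, the calculation is straightforward to reproduce. The short Python sketch below implements the two formulas above for a single construct; the loading values are made-up placeholders, not the actual entries of Table 8.

```python
# AVE and CR for one construct from its standardized factor loadings.
# The loadings below are illustrative placeholders, not Table 8 values.
import numpy as np

def ave_cr(loadings):
    lam = np.asarray(loadings, dtype=float)
    delta = 1.0 - lam**2                 # standardized error variances
    ave = np.sum(lam**2) / lam.size      # average variance extracted
    cr = np.sum(lam)**2 / (np.sum(lam)**2 + np.sum(delta))
    return ave, cr

ave, cr = ave_cr([0.78, 0.81, 0.74, 0.69])
print(f"AVE = {ave:.3f} (threshold 0.5), CR = {cr:.3f} (threshold 0.7)")
```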
The values for AVE and CR are listed in the last two rows of the table (marked with grey). As can be seen from the table, the CR values are higher than 0.7 for all variables considered in the analysis, which suggests good construct reliability. All the AVE values exceed the threshold of 0.5, which signifies that the model has good convergent validity. Given the above, we can say that the proposed model meets the criteria of reliability and convergent validity. Hypotheses The hypotheses to be tested are: Hypothesis 1 (H1). The attitude towards selective collection has a positive impact on the intention of selective waste collection. Hypothesis 2 (H2). The reward that consumers can receive when collecting selectively has a positive impact on the intention to selectively collect. Hypothesis 3 (H3). Perceived behavioral control positively influences the intention of selective collection. Hypothesis 4 (H4). Knowledge of the environment positively influences the intention of selective collection. Hypothesis 5 (H5). Responsibility positively influences the intention of selective collection. Hypothesis 6 (H6). The state/existence of the selective collection infrastructure positively influences the intention of selective collection. Hypothesis 7 (H7). Awareness positively impacts selective collection behavior. Hypothesis 8 (H8). Selective collection intention positively impacts selective collection behavior. The considered hypotheses are summarized in Figure 9. Demographic and Socio-Economic Characteristics The demographic and socio-economic characteristics of the respondents are presented in Table 9. Table 9. Demographic and socio-economic profile of respondents (n = 695 persons). Out of the 695 respondents, 79.71% are women, while 20.29% are men. The age structure of the respondents shows that most of them belong to the 21-30 years old (34.10%) and the 31-40 years old (36.67%) categories. The residence of the respondents is predominantly urban, with a share of 81.29%; the rest of the people come from rural areas (18.71%). Regarding the occupation, it is observed that 29.64% of the respondents are students and 39.42% are employed in the private sector. These two categories account for 69.06% of the total answers. The number of family members is 1-2 people for 46.19% of respondents, 3-4 people for 45.61% and more than 5 people for 8.2% of respondents. Monthly income was measured in the national currency and converted into euros. Among the respondents, 24.60% have a monthly income of less than 280 € (the minimum wage in the Romanian economy), 20% have an income that falls between 751-1000 € and 21.57% marked an income between 401-750 €. Additionally, there are respondents who fall into the following categories of monthly income: 281-400 € (10.50%), 1001-1500 € (12.22%) and over 1500 € (11.01%). Demographic and Socio-Economic Variables Among the most used social networks, Facebook is the most popular platform, with 61.87% of respondents marking it as the most frequently used social platform. Facebook is followed by Instagram, which is marked as the most used platform by 31.95% of respondents. These two social networks are followed by LinkedIn (2.01%) and Twitter (1.15%). Additionally, 3.02% of the respondents pointed out that they most frequently use another social network that is not found among those mentioned in the questionnaire. Selective Collection Behavior In the following, the answers received for the determinants considered in the paper will be discussed. The decision to provide an analysis related to the obtained results is in line with approaches in the literature.
Depending on the situation, the authors have decided to present either a complete analysis of the values of the variables or some cumulative results in terms of mean and standard deviation. Please consider the following papers for further reference: [22,24,25,28,35,36,38,41]. The purpose of analyzing the answers the respondents offered is to shed light on the respondents' level of knowledge related to each of the elements considered in the questionnaire and further analyzed using the structural equation model. We believe this analysis to be important as it provides more insight into the respondents' beliefs about the selected topics related to the selective collection process. A similar analysis is performed by Byrne and O'Regan [24]. Attitude The answers to the questions corresponding to the variable that measures consumers' attitudes towards selective waste collection are highlighted in Figure 10. Based on the answers, it was observed that, in general, respondents have a good opinion of the selective collection process. A total of 533 respondents (76.69%) expressed their total agreement on the benefit of selective collection at the level of the whole society, while only 6 people (0.86%) expressed their disagreement or total disagreement. Practicing selective collection contributes to the protection of the environment according to 659 participants (94.82%) in the study, while only 9 of the respondents (1.29%) expressed their disagreement or total disagreement. A total of 494 respondents (71.08%) considered that the practice of selective collection reduces the amount of waste they produce, 123 respondents (17.70%) did not have an opinion on this statement, and 78 of the respondents (11.22%) did not agree with this idea. Also, 539 respondents (77.55%) stated that they enjoyed participating in selective waste collection, 41 respondents (5.90%) marked that they were not interested in collecting selectively, while 115 people (16.55%) did not have an opinion related to this issue. Awareness The questions asked to determine the respondents' awareness of environmental issues had a positive result, most of the respondents being aware of their existence and magnitude. Figure 11 shows that the vast majority of respondents, 658 (representing 94.68%), said that they were aware of the benefits that selective collection has on the environment. On the other hand, only 12 respondents (1.73%) stated that they did not agree with this aspect, while 25 respondents (3.60%) did not have any opinion on this idea. Regarding the increase in the amount of waste that ends up being recovered, it is observed that 586 respondents (84.32%) agree with this statement, while 26 disagree (3.74%). Of the respondents, 11.94% (represented by 83 respondents) had no particular opinion regarding this statement. The vast majority of respondents (609 people), representing 87.63%, were aware that when they do not collect selectively, they contribute to environmental pollution and increase the amount of waste that reaches the landfill. Only 35 of the respondents (5.04%) stated that they did not agree with this fact, and 51 of the respondents (7.34%) did not have any opinion on this issue. The idea that all members of society must cooperate to solve the waste problem was approved by 662 of the respondents (95.25%); on the other hand, 10 respondents (1.44%) marked that they do not agree with this idea, and 23 respondents (3.31%) were neutral on this issue.
Environmental Knowledge Given the statements made to determine the respondents' level of environmental knowledge, it turned out that many of them are correctly informed. Thus, Figure 12 shows that 525 respondents (75.54%) know how to collect selectively in a correct manner and know the types of waste that can be recycled. There were also 45 respondents (6.47%) who admitted that they did not know how to collect selectively in a correct way, while 125 respondents (17.99%) did not express their opinion. At the same time, 612 people (88.06%) knew that storing waste together can cause contamination of recyclable waste, making it impossible to recycle. This information was not available to 32 respondents (4.60%), while 51 respondents (7.34%) were neutral about this statement. Regarding the idea that landfilling can affect the groundwater and thus human health, it is noted that 626 of the participants in the questionnaire (90.07%) had this information and agreed with this aspect, while 25 respondents (3.60%) did not agree with this idea. At the same time, 44 of the participants (6.33%) did not express any opinion on this issue. Waste Collecting Infrastructure Regarding the selective collection infrastructure, as shown in Figure 13, 649 respondents (93.38%) agreed that the collection centers must be properly managed, a smaller group of 11 respondents (1.58%) did not agree with this aspect, while 35 respondents (5.04%) did not express their opinion. A total of 558 respondents (80.29%) claimed that they would selectively collect all waste if the authorities provided a modern collection infrastructure. Additionally, 50 respondents (7.19%) stated that they would not collect selectively even if the collection infrastructure were modernized, while 87 participants in the questionnaire (12.52%) were neutral about this idea. The idea that collection centers should be close to homes was supported by 541 respondents (77.84%), while only 36 respondents (5.18%) did not agree with this idea. At the same time, 118 respondents (16.98%) were neutral in this regard. Of the total number of 695 participants in the questionnaire, 652 (93.81%) argued that selective collection centers should not be a danger to human health and should be kept in safe conditions. A small group of 17 people (2.45%) did not agree that selective collection centers should be kept in safe conditions, while 26 respondents (3.74%) had no opinion on this issue. Perceived Behavioral Control Perceived behavioral control was measured by responses to three questions related to the difficulty of selective collection, respondents' knowledge regarding recycling and the involvement of authorities in selective collection. A summary of the answers received for each question is provided in Appendix B. More than half of the respondents, 60.57%, indicated that they do not consider selective collection a difficult activity. On the other hand, 28.06% find selective collection to be a difficult activity, while 22.59% of the respondents did not express any opinion on the difficulty of the selective collection activity. Furthermore, regarding the involvement of the authorities in selective collection, 56.83% of the respondents disagreed with the idea that the authorities provide enough containers to collect selectively, while 24.75% agreed that the provided containers were enough for ensuring a proper selective collection process.
Selective Collection Behavior Regarding the selective collection behavior, the participants in the questionnaire answered five questions related to this issue. The results are summarized in Appendix B. Half of the respondents agreed that they separate all the waste they produce, while 23.31% of them disagreed with this statement, indicating that they do not separate the waste they produce. At the same time, 67.63% of the respondents did not agree with the statement that they do not collect selectively and throw all waste in the same container, while 18.99% of them indicated that they do throw all waste in the same container. Intention Respondents' intention to collect selectively was analyzed through three questions. A summary of the received answers is provided in Appendix B. Overall, the participants' answers to the questions belonging to this category were positive, with more than half of them intending to engage in selective collection. Thus, 74.82% of the respondents stated that they intend to get involved in this activity, even if it would not always be easy for them to do so, while 5.61% of the participants stated that they did not intend to do so. Furthermore, 19.57% of respondents did not express any opinion on this issue. Regarding the purchase of products whose packaging is 100% recyclable, 43.31% of respondents expressed their intention to purchase this type of product, 21.44% of them did not agree with this aspect, and 35.25% of participants did not express their opinion on this idea (Appendix B). Responsibility Regarding the awareness of the responsibility consumers have, the respondents mainly felt responsible for the amount of waste they generate that could be recycled, as well as for increasing the amount of waste that could be recovered (Appendix B). Therefore, 86.18% of respondents agreed that they feel responsible for collecting selectively in order to increase the amount of waste that could be recovered, while 3.74% of respondents said they did not feel responsible for this action. Also, 10% of the participants did not express their opinion on this issue. Regarding the amount of waste that could be recycled but instead reaches the landfill, 85.18% of respondents feel responsible for it, being aware that their actions could reduce this amount of unrecovered waste. On the other hand, 4.32% of respondents did not feel responsible for the waste that ends up in the landfill, and 10.50% did not have an opinion on this issue (Appendix B). Reward In general, people are encouraged to engage in a particular activity when they receive rewards. In order to determine whether this is also the case with regard to selective collection, the answers of the questionnaire participants to the two questions related to this topic were analyzed (Appendix B). Thus, 39.71% of respondents did not agree that they would collect selectively if selective collection programs involved a financial reward, while 30.94% of them claimed that a reward would stimulate them to collect selectively. At the same time, 54.24% of respondents agreed that they would be more likely to collect selectively if they received a discount on their supermarket receipt for returned packaging. However, 24.31% of respondents did not agree with this idea, and 21.44% did not express any opinion on it (Appendix B). Convenience, Government Measures, Social Norms and Taxation The answers received regarding these categories are presented in Appendix B.
We have decided not to discuss them in the body of the paper as these constructs do not belong to the model nor to the hypotheses to be tested. Structural Model Results Based on the research hypotheses, the structural model was run. Table 10 summarizes the decision taken with regard to each of the formulated hypotheses. As can be seen in Table 10, of the 8 hypotheses we considered, 6 were validated at different levels of significance. The variables that had a significant and positive impact on the selective collection Intention are: Perceived Behavioral Control, Reward, Responsibility and Selective Collection Infrastructure. Two of the determinants, represented by Attitude and Environmental Knowledge, did not receive significant values in order to be validated. The Selective Collection Behavior is positively influenced by both considered variables, namely Awareness and Intention, in the case of the Romanian participants. Based on the answers received for Awareness, it can be stated that most of the respondents have the necessary knowledge regarding the importance of cooperation among the members of society for solving the waste issue. Moreover, as the influence of Awareness on the selective collection behavior is acknowledged as significant, increasing the population's awareness will generate a positive outcome for the overall selective collection process. Considering the literature, we can observe that the positive impact of Reward has also been demonstrated in the research conducted by Wang et al. [40] and Amini et al. [23]. Considering the results, it can be observed that Responsibility has a positive and significant impact on Intention, similar to the study conducted in Romania regarding the e-waste recycling behavior [42]. Another validated result, represented by the positive impact of the Waste Collection Infrastructure on Intention, is in line with the research conducted by Nduneseokwu et al. [31]. Perceived Behavioral Control has a positive and significant impact on the Intention of selective collection, in line with the studies by Yuan et al. [29], Boldero [13], Wang et al. [40], Mahmud and Osman [17], Valle et al. [16], and Halder and Singh [43]. Both hypotheses regarding the variables that influence behavior were validated at a high level of significance. Even in this case, the results are in line with the ones from the literature, in which Awareness has been shown to have a significant and positive impact on Selective Collection Behavior, as demonstrated in the study conducted by Nguyen [39], while the Intention of selective collection has been determined to have a significant impact on Behavior, as highlighted in Sujata et al. [37], Delcea et al. [42] and Strydom [36]. The "not supported" hypotheses are those regarding the positive impact of Attitude and Environmental Knowledge on the Intention of selective collection. Considering the cumulative answers to the questions included in the three constructs regarding Environmental Knowledge, Attitude and Intention, it can be observed that most of the respondents indicated a strong knowledge of how storing waste on the ground can affect groundwater and human health, and of the fact that storing waste in an improper way can make recycling impossible; however, they marked a large number of answers in the "neutral" area when asked about their intention to actively participate in the selective collection initiative or their intention to buy products with 100% recyclable packaging. The same situation happened in the Attitude case.
Even though Attitude scored relatively good marks overall for the first two questions, the last two questions received many neutral answers related to how the respondents feel as a result of the selective collection process or to the reduction that selective collection provides in terms of produced waste. As the female-male respondent ratio was unbalanced in favor of the females participating in the study, further analysis was conducted by dividing the respondents into two groups based on their gender. With these two groups, the structural model was run for testing the eight hypotheses, and the results in Table 11 were obtained. Based on the data in Table 11, small differences between genders were observed for the H2 and H7 hypotheses. For the two influencing factors involved, namely Awareness and Reward, it was determined that the significance level decreased for Awareness and increased for Reward in the case of males compared to females. As a result, it can be stated that the positive relation between Reward and Intention is stronger in the case of males. Looking closer at the answers offered by the male respondents, it can be seen that the Reward was regarded more as a determining factor than in the case of females; this might be due to a more practical approach men may have to income and earnings. On the other hand, as females are more likely to engage in the households' domestic activities, it is possible that the Reward is seen by them as a less important factor. In turn, the relationship between Awareness and Selective Collection Behavior has a smaller significance level for males, which might also be due to the fact that males are less involved in household activities. This observation is in line with the study conducted by Arcury [66]. Considering the results in Table 11, it can be stated that no significant difference is reported based on the respondents' gender, with both males and females presenting a positive relationship among the considered variables, except for Attitude and Environmental Knowledge, where no conclusion can be made. Conclusions The present paper analyzes the determinants of the selective waste collection behavior in the case of Romanian citizens. Based on the literature, a series of possible variables were identified and a questionnaire was created in order to extract the variables' influence on selective collection intention and behavior. As a result of the questionnaire validation process, nine variables remained in the study, for which eight hypotheses have been tested. After the hypothesis testing process, six of the formulated hypotheses were supported at different significance levels. Among the most influential factors for the consumers' intention to engage in selective waste collection, one can name the perceived behavioral control, reward, responsibility and waste collection infrastructure. Considering the determinants of the selective waste collection behavior, namely the intention and awareness, one can identify some of the steps to be taken in order to increase the consumers' participation in selective collection actions. As a result, boosting the consumers' intention by applying proper reward policies or providing adequate waste collection infrastructure is determined to further have a positive and significant influence on the outcome, represented by the consumers' enhanced engagement in selective waste collection.
Moreover, creating promotional campaigns through which the consumers' responsibility is enhanced can produce a positive outcome on the consumers' behavior. Considering the literature, it has been observed that awareness campaigns have a moderate influence on the improvement of separate collection rates [67]. As the respondents signaled the need for a more modern selective collection infrastructure, and given the influence of its presence on the selective collection behavior, decision-makers can consider allocating more resources to such activities. Furthermore, offering the proper means to ease the selective collection process and creating selective collection centers that are close to the residential areas might lead to a positive outcome. Being aware of the characteristics and the incentives the consumers consider when engaging in selective collection behavior, companies can re-think their business approach and can try to build a brand identity more oriented towards supporting green and environmentally protective activities [68]. The study has limitations related to the size of the sample and to the fact that it only includes respondents who had the skills needed to fill in a questionnaire in an online environment. Further developments of the study include determining the levels of the reward that would motivate the consumers to engage in such activities, along with what the respondents believe to be a reasonable distance to selective collection bins. Besides these two elements, other characteristics of the selective collection infrastructure that might be considered by the consumers when deciding to collect the waste selectively can be extracted and included in an agent-based model. Based on this model, different scenarios can be simulated in order to assist the decision makers in substantiating their decisions and actions, which would result in stimulating consumers' participation in selective waste collection. Funding: This work received no funding. Conflicts of Interest: The authors declare no conflict of interest.
Ermakov-Lewis Invariant for Two Coupled Oscillators We show that two coupled time dependent harmonic oscillators with equal frequencies have an invariant that is a generalization of the Ermakov-Lewis invariant for the single time dependent harmonic oscillator. Introduction Time dependent harmonic oscillators arise in several branches of physics, from classical mechanics to quantum mechanical systems such as optical trapping of atoms and molecules [1,2,3,4]. The wavefunction of time independent harmonic oscillators has been reconstructed in ion-laser interactions [5] and quantized fields in cavities [6], and ways to overcome the effects of harmful environments, which cause decoherence effects and therefore the destruction of nonclassical states of quantum systems, have been put forward [7,8]. The extensions from single harmonic oscillators to coupled time dependent harmonic oscillators may be found in ion-laser interactions [1,2,3], quantized fields propagating through dielectric media [9], shortcuts to adiabaticity [10], and the Casimir effect [11], to name a few. Constants of motion are of central importance in the study of dynamical systems. In particular, invariants in mechanical systems for time dependent Hamiltonians have attracted considerable interest over the years [12] and, in particular, time dependent harmonic oscillators (TDHO) have attracted attention due to their applications in several areas of physics [13,14]. The most relevant cases are the linear potential [15], which may be produced in classical optics [16], and the quadratic spatial dependence that leads to the quantum mechanical time dependent harmonic oscillator (QM-TDHO) [17]. On the other hand, the simple extension to two coupled time dependent harmonic oscillators has been considered and its solution presented by Macedo and Guedes for a very limited case of time dependent functions [18]. This approach has been improved by the use of transformations based on orthogonal functions invariants [17] that simplify the coupled harmonic oscillators Hamiltonian [19]. The QM-TDHO has been solved under various scenarios, such as time dependent mass [20,21], and applications of invariant methods have been used in adiabatic regimes [22], for the control of quantum noise [23], and for the propagation of light in waveguide arrays [24,25,26,27,28,29,30,31]. The main purpose of the present contribution is to obtain a generalization of the single harmonic oscillator Ermakov-Lewis invariant [32] to the case of two coupled harmonic oscillators, when the frequencies associated with the individual oscillators are equal. Although Thylwe and Korsche [33] have given an 'Ermakov-Lewis invariant' for the case of N coupled oscillators [34], we give here an invariant, for the two oscillators case, that has the same form as the one introduced by Lewis [32], and therefore an auxiliary function that obeys the Ermakov equation is needed in the invariant. Lewis invariant In the sixties, Lewis introduced a quantity called the invariant, of the form (throughout the manuscript we will set $\hbar = 1$)

$$\hat{I} = \frac{1}{2}\left[\left(\frac{\hat{x}}{\rho_x}\right)^2 + \left(\rho_x \hat{p}_x - \dot{\rho}_x \hat{x}\right)^2\right], \qquad (1)$$

with $\rho_x$ an auxiliary function that obeys the Ermakov equation

$$\ddot{\rho}_x + \Omega^2(t)\,\rho_x = \frac{1}{\rho_x^3}, \qquad (2)$$

and that, consequently, takes the name Ermakov-Lewis invariant.
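As a quick numerical illustration (not part of the original paper), the following Python sketch integrates the classical equation of motion for $x$ together with the Ermakov equation (2), for an arbitrary illustrative frequency profile, and verifies that the classical analog of the invariant (1) stays constant to solver tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

def Omega2(t):
    # Illustrative time-dependent frequency squared; any smooth positive choice works.
    return 1.0 + 0.5 * np.sin(0.4 * t)

def rhs(t, s):
    # s = (x, xdot, rho, rhodot): the TDHO equation plus the Ermakov equation (2).
    x, xd, r, rd = s
    return [xd, -Omega2(t) * x, rd, -Omega2(t) * r + 1.0 / r**3]

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 40.0, 2000)
x, xd, r, rd = sol.sol(t)
I = 0.5 * ((x / r)**2 + (r * xd - rd * x)**2)   # classical analog of (1)
print("spread of I(t):", I.max() - I.min())      # tiny: constant to solver tolerance
```

Even though $x(t)$ and $\rho_x(t)$ both oscillate, the printed spread of $I(t)$ is at the level of the integration error, which is the defining property of the invariant.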
The operator given in (1) is invariant in the sense that

$$\frac{\partial \hat{I}}{\partial t} + \frac{1}{i}\left[\hat{I}, \hat{H}_x\right] = 0,$$

with $\hat{H}_x$ the Hamiltonian for a (one-dimensional) single time dependent harmonic oscillator,

$$\hat{H}_x = \frac{\hat{p}_x^2}{2} + \frac{\Omega^2(t)}{2}\,\hat{x}^2.$$

We may note in the invariant (1) two main ingredients: a displacement in momentum by the position operator and the amplification or de-amplification of position or momentum operators, i.e., squeezing [35,36,37,38]. These ingredients, converted into unitary transformations, give the transformation that we used to write a solution to the TDHO Hamiltonian [17]. In the above transformation, by taking $f(t) = \dot{\rho}_x/\rho_x$ and $g(t) = \ln \rho_x$, the time dependence in the Hamiltonian (4) may be factorized, yielding an easy way to write the evolution operator for the transformed Hamiltonian. On the other hand, if the functions $f(t) = \dot{u}_x/u_x$ and $g(t) = \ln u_x$ are used, where the auxiliary function is a solution of the classical equation of motion

$$\ddot{u}_x + \Omega^2(t)\,u_x = 0, \qquad (6)$$

the term proportional to $\hat{x}^2$ is eliminated from the Hamiltonian, i.e., yielding a free particle [17]. A relation between the auxiliary functions that simplify the Hamiltonian (4) may be obtained, such that, once we solve equation (2) or (6), we may find the other auxiliary function. Coupled time dependent harmonic oscillators We consider the time-dependent Hamiltonian for two interacting harmonic oscillators (we set the masses equal to one), with the $\kappa$'s being the time-dependent spring parameters. By setting $\Omega_x^2(t) = \kappa_x(t) + \kappa(t)$ (9) and $\Omega_y^2(t) = \kappa_y(t) + \kappa(t)$ (10) and $\eta(t) = -2\kappa(t)$ (11), we end up with a Hamiltonian of the form (12), whose classical equations of motion are given in (13). If we consider the case of equal frequencies, $\Omega_x(t) = \Omega_y(t) = \Omega(t)$, the Hamiltonian above reduces to $\hat{H}_{xy}$. It is not difficult to show that for this Hamiltonian there exists an invariant, $\hat{I}_{xy}$, of the same form as (1), such that

$$\frac{\partial \hat{I}_{xy}}{\partial t} + \frac{1}{i}\left[\hat{I}_{xy}, \hat{H}_{xy}\right] = 0.$$

The invariant (14) has the same structure as the Ermakov-Lewis invariant (1) for the single time dependent harmonic oscillator. It is easy to prove that the auxiliary function, $\rho$, obeys the Ermakov equation

$$\ddot{\rho} + \left[\Omega^2(t) + \eta(t)\right]\rho = \frac{1}{\rho^3}. \qquad (18)$$

Solution for equal frequencies We now consider the time-dependent transformation [17] $\hat{T}$ that produces the set of transformed quantities (20), where $v$ is the solution of equation (21). If we transform the wave function with the transformation above, i.e., $|\phi\rangle = \hat{T}|\psi\rangle$ (22), the Schrödinger equation (23) needs to be transformed. We do it by substituting (22) into (23), which gives the transformed equation. By noting the relations that follow from (21), we may rewrite the Schrödinger equation accordingly. Now, performing a second transformation, $|\phi_\theta\rangle = \hat{R}_\theta |\phi_u\rangle$, with $\hat{R}_\theta = \exp[i\theta(\hat{x}\hat{p}_y - \hat{y}\hat{p}_x)]$, and by setting $\theta = \pi/4$, we obtain an integrable equation, in which we may see that we have a time dependent harmonic oscillator Hamiltonian in the variable $y$ and a free particle in the variable $x$, whose solutions are well known [39]. Conclusions We have shown that the Ermakov-Lewis invariant for the (one-dimensional) single time dependent harmonic oscillator can be generalized to the two-coupled harmonic oscillators case when equal time dependent frequencies are considered. The invariant keeps the same structure, as does the Ermakov equation that needs to be solved, replacing the time dependent frequency, $\Omega^2(t)$, by an effective frequency, $\Omega^2(t) + \eta(t)$.
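A similar numerical check can be run for the coupled, equal-frequency case. The sketch below assumes, purely for illustration, that the coupling enters the classical Hamiltonian as $\eta(t)\,x\,y$, so that the $\pi/4$-rotated mode $u = (x+y)/\sqrt{2}$ obeys an oscillator equation with exactly the effective frequency $\Omega^2(t) + \eta(t)$ quoted in the conclusions; the frequency and coupling profiles are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Omega2(t):
    return 1.0 + 0.3 * np.sin(0.5 * t)   # illustrative frequency squared

def eta(t):
    return -0.4 * np.cos(0.2 * t)        # illustrative coupling, keeps Omega2 + eta > 0

def rhs(t, s):
    # s = (x, y, xdot, ydot, rho, rhodot); rho obeys the effective Ermakov equation (18).
    x, y, xd, yd, r, rd = s
    w2 = Omega2(t) + eta(t)
    return [xd, yd,
            -Omega2(t) * x - eta(t) * y,   # from H = (px^2+py^2)/2 + Omega^2 (x^2+y^2)/2 + eta*x*y
            -Omega2(t) * y - eta(t) * x,
            rd, -w2 * r + 1.0 / r**3]

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0, 0.0, 0.5, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 40.0, 2000)
x, y, xd, yd, r, rd = sol.sol(t)
u, ud = (x + y) / np.sqrt(2), (xd + yd) / np.sqrt(2)
I = 0.5 * ((u / r)**2 + (r * ud - rd * u)**2)   # Ermakov-Lewis form in the symmetric mode
print("spread of I(t):", I.max() - I.min())      # constant to solver tolerance
```

Under the stated assumption, the symmetric normal mode sees the effective frequency $\Omega^2(t) + \eta(t)$, and the Ermakov-Lewis combination built on it is conserved, mirroring the single-oscillator case.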
Exploring the RNA-Recognition Mechanism Using Supervised Molecular Dynamics (SuMD) Simulations: Toward a Rational Design for Ribonucleic-Targeting Molecules? Although proteins have represented the molecular target of choice in the development of new drug candidates, the pharmaceutical importance of ribonucleic acids has gradually been growing. The increasing availability of structural information has brought to light the existence of peculiar three-dimensional RNA arrangements, which can, contrary to initial expectations, be recognized and selectively modulated through small chemical entities or peptides. The application of classical computational methodologies, such as molecular docking, for the rational development of RNA-binding candidates is, however, complicated by the peculiarities characterizing these macromolecules, such as the marked conformational flexibility, the singular charge distribution, and the relevant role of solvent molecules. In this work, we have thus validated and extended the applicability domain of SuMD, an all-atoms molecular dynamics protocol that makes it possible to accelerate the sampling of molecular recognition events on a nanosecond timescale, to ribonucleotide targets of pharmaceutical interest. In particular, we have proven the methodology's ability by reproducing the binding modes of viral or prokaryotic ribonucleic complexes, as well as that of artificially engineered aptamers, with an impressive degree of accuracy. INTRODUCTION Ribonucleic acid (RNA) is a polymer whose biological importance has increased progressively over the last 50 years. Despite the central dogma of molecular biology considering this nucleic acid simply as a functional messenger between DNA genetic information storage and protein biosynthesis, RNA has recently been reappraised as an ancestral molecule of primary importance in the abiogenesis process. At the origin of life, RNA probably encompassed both an informational role, which progressively shifted toward the more stable and easily replicable DNA polymer, and a catalytic function, which was gradually flanked by more versatile proteins (Morris and Mattick, 2014). The complexity hiding behind RNA's biological functions can be intuited by considering the human organism, whose genetic heritage can almost entirely be transcribed into RNA, despite only a minimal portion (about 3%) coding for proteins (Warner et al., 2018). A great majority of these transcripts therefore remain untranslated, originating non-coding genomic portions. The RNA revolution has thus shed light on the regulatory activity of this widely different class of macromolecules that, along with some proteins, cooperate to control and finely orchestrate genome expression (Connelly et al., 2016). RNA polymer lengths range from small hairpins composed of a few tens of nucleobases to long non-coding RNA sequences (lncRNAs) that can reach up to a few thousand nucleotides (Connelly et al., 2016). Differently from DNA, RNA usually exists as a single-stranded molecule that is not strictly limited by Watson-Crick base pairing. In solution, ribonucleic acids explore a wide landscape of three-dimensional structures, characterizable by the presence of peculiar functional domains able to specifically recognize other nucleic acids, polypeptides, glyco-derivatives, or cognates of small organic molecules (Draper, 1995; Cruz and Westhof, 2009; Salmon et al., 2014; Flynn et al., 2019).
From a topological point of view, the tertiary and quaternary structures that distinguish ribonucleic acids from their deoxyribonucleic counterpart make them more similar to proteins, a consideration that has paved the way for an attempt to pharmacologically modulate their biological functions through the discovery of small molecules interacting with RNA (SMIRNA) (Sucheck and Wong, 2000; Connelly et al., 2016). Interestingly, in recent research work, it has been estimated that pharmacologically modulating RNA would allow us to expand, by more than an order of magnitude, the universe of targetable macromolecules, and this would thus considerably extend the portion of the druggable genome (Ecker and Griffey, 1999; Warner et al., 2018). Although RNA has been historically considered as an "undruggable" pharmaceutical target, the discovery that many drugs of undeniable therapeutic importance, especially antibiotics, act at this level has attracted the interest of the scientific community, resulting in greater effort being made toward the development of new tools for this purpose (Donlic and Hargrove, 2018; Disney, 2019). Furthermore, the orthogonality characterizing RNA homologous transcripts belonging to virus, prokaryote, and eukaryote genomes makes RNA an interesting target for the purpose of achieving selectivity, especially in the field of anti-infective compound development (Ecker and Griffey, 1999; Connelly et al., 2016). All these aspects, therefore, make the discovery of SMIRNAs extremely intriguing. A first pioneering approach to rationally design new RNA-targeting compounds, simply starting from the knowledge of the oligonucleotide sequence of pathological interest, was developed by the Disney research group and was successfully applied to a plethora of expanded repeating RNAs that are known to cause microsatellite disorders (Velagapudi et al., 2014; Disney et al., 2016). In addition, the quantitative structure-activity relationship (QSAR) model and chemical similarity search were initially exploited to identify or optimize in silico new chemical probes targeting RNA. Since X-ray crystallography, NMR spectroscopy and, recently, Cryo-EM techniques have unveiled with an atomistic level of detail a multitude of three-dimensional RNA structures, the scientific community has begun to evaluate the applicability of structure-based drug design strategies (SBDD). These approaches, until now mainly validated on protein targets, could enhance the rational design of SMIRNAs. Molecular docking represents one of the elective in silico techniques, exploited in both the academic and industrial worlds, to accelerate the discovery and optimization of new drug candidates by evaluating the putative small molecules' binding mode and providing a way to perform a ranking of vast compound libraries. There are, however, many peculiarities of ribonucleic acids that affect both performance and accuracy of docking protocols, making their application challenging. The polyanionic backbone of RNA determines a peculiar charge distribution on the polymer surface, quite different from the one characterizing proteins, on which the scoring functions were traditionally calibrated (Disney, 2019). Furthermore, docking protocols do not explicitly consider the role of solvent during the molecular recognition process, whereas structural data have highlighted how water molecules can stabilize RNA-ligand complexes, often mediating hydrogen bond networks (Fulle and Gohlke, 2009).
However, the aspect that most affects RNA-docking accuracy is the flexibility and the dynamic behavior characterizing ribonucleic acids, which are usually neglected by docking algorithms, thus limiting the discovery to compounds targeting a narrow region of the conformational space (Hermann, 2002; Fulle and Gohlke, 2009; Disney et al., 2014). An attempt to overcome these limitations was conducted by Stelzer et al., who performed a docking-based virtual screening on an RNA dynamic ensemble constructed by combining molecular dynamics simulations (MD) with NMR spectroscopy and reported the discovery of six molecules able to bind HIV-1 TAR with quite good affinity. MD simulations would represent a valuable computational tool with which to investigate different ligand-RNA recognition processes, fully considering both target flexibility and the solvent presence. Interestingly, molecular mechanics force fields (FF), such as AMBER or CHARMM, were revisited and refined over the last years to improve ribonucleotide simulation accuracy (Pérez et al., 2007; Denning et al., 2011). Nevertheless, the use of MD is mostly limited to the fluctuation exploration in the post-docking procedure, since ligand-target associations are rare events that can be sampled only through long-timescale, computationally expensive simulations. An implementation of classical MD, called supervised molecular dynamics (SuMD), was recently developed in our research group. SuMD is able to speed up the exploration of the ligand-receptor recognition pathways on a nanosecond timescale through the implementation of a tabu-like supervision algorithm (Sabbadin and Moro, 2014). The protocol has so far been validated in different scenarios, including ion-protein, ligand-protein, and peptide-protein bound complexes, proving that it could reproduce the experimentally determined final state with great geometric accuracy (Cuzzolin et al., 2016; Salmaso et al., 2017; Bissaro et al., 2019). In this work, SuMD simulations were applied for the first time to investigate the recognition mechanism involving ribonucleic acid macromolecules, with the aim of extending the methodology's applicability domain. This pilot study, which provided encouraging results, took into account a plethora of different ribonucleic complexes of pharmaceutical interest, the three-dimensional structures of which are known and available on the Protein Data Bank archive (Berman et al., 2000). SuMD methodology proved its ability in describing, with a reduced computational effort, the whole process of ligand-RNA recognition (from the unbound to the bound state), independently of the target's topological complexity. As far as we know, this represents the first attempt to overcome methodological limitations within molecular docking when applied to ribonucleic acids, describing binding events through an all-atoms MD-based approach. This study confirms the possible use of SuMD as an innovative computational tool that can accelerate the discovery of new drug candidates, with particular attention to SMIRNAs. Software Overview MOE suite (Molecular Operating Environment, version 2018.0101) was used to perform most of the general molecular modeling operations, such as RNA and ligand preparation. All these operations have been performed on an 8 CPU (Intel(R) Xeon(R) CPU E5-1620 3.50 GHz) Linux workstation. Molecular dynamics (MD) simulations were performed with an ACEMD engine (Harvey et al., 2009) on a GPU cluster composed of 18 NVIDIA GPUs, whose models range from GTX 980 to Titan V.
For all the simulations, the ff14SB force field with χ modification tuned for RNA (χOL3) was adopted to describe ribonucleic acids, while a general Amber force field (GAFF) was adopted to parameterize small organic molecules (Wang et al., 2006; Sprenger et al., 2015; Tan et al., 2018). Structures Preparation The three-dimensional coordinates of each RNA-SMIRNA complex investigated were retrieved from the RCSB PDB database and prepared for SuMD simulations as herein described (Cuzzolin et al., 2016). For structures solved by NMR, which contain multiple conformations of the same complex, the one with the lowest potential energy (usually the first) was selected and then used. All complexes were then processed by means of the MOE protein structure preparation tool: missing atoms in nucleotide bases were built according to the AMBER14 force field topology. Missing hydrogen atoms were added to X-Ray-derived complexes, and appropriate ionization states were assigned by the Protonate-3D tool (Labute, 2009). Ligand coordinates (both small molecules and peptides) were moved at least 30 Å away from the RNA binding cleft, a distance larger than the electrostatic cut-off used in the simulation (9 Å with the Amber force field), to avoid premature interactions during the initial phases of the SuMD simulations. Solvated System Setup and Equilibration Each system investigated by means of SuMD contained an RNA target macromolecule and the respective ligand (a SMIRNA or a peptide), moved far away from the binding site as previously described. The systems were explicitly solvated by a cubic water box with cell borders placed at least 15 Å away from any RNA/ligand atom, using TIP3P as a water model. To neutralize the total charge of each system, Na+/Cl− counterions were added to a final salt concentration of 0.154 M. The systems were energy minimized with 500 steps of the conjugate-gradient method; then 500,000 steps (1 ns) of NVT followed by 500,000 steps (1 ns) of NPT simulations were carried out, both using a 2 fs time step and applying harmonic positional constraints on RNA and ligand heavy atoms with a force constant of 1 kcal mol−1 Å−2, gradually reduced by a scaling factor of 0.1. During this step, the temperature was maintained at 310 K by a Langevin thermostat with low damping of 1 ps−1 and the pressure at 1 atm by a Berendsen barostat (Berendsen et al., 1984; Loncharich et al., 1992). The M-SHAKE algorithm was applied to constrain the bond lengths involving hydrogen atoms. The particle-mesh Ewald (PME) method was exploited to calculate electrostatic interactions with a cubic spline interpolation and 1 Å grid spacing, and a 9.0 Å cutoff was applied for Lennard-Jones interactions (Essmann et al., 1995). Supervised Molecular Dynamics (SuMD) Simulations Molecular dynamics simulations represent a well-validated computational tool that, through the numerical solution of the Newton equation of motion, makes it possible to describe the time-dependent evolution of a molecular system. Despite the impressive temporal resolution characterizing the technique, to capture pharmaceutically relevant events, such as the molecular recognition between a drug and its biological target, huge computational efforts are required. The SuMD protocol instead improves the efficiency with which a binding event is sampled, from a microsecond to a nanosecond timescale, by applying a tabu-like algorithm.
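A schematic Python sketch of this tabu-like supervision loop, anticipating the details given in the next paragraph, might look as follows; this is not the authors' code, and `run_segment`, `com_distance`, and `with_random_velocities` are hypothetical stand-ins for the MD engine call, the ligand/binding-site center-of-mass distance, and a helper that resamples atomic velocities on the previous coordinates:

```python
import numpy as np

SEGMENT_PS = 600        # length of each short unbiased MD segment (ps)
THRESHOLD_A = 5.0       # supervision is switched off below this distance (Å)

def supervised_binding(state, run_segment, com_distance, max_segments=200):
    for _ in range(max_segments):
        frames, end_state = run_segment(state, SEGMENT_PS)   # short unbiased MD
        d = np.array([com_distance(f) for f in frames])
        slope = np.polyfit(np.arange(len(d)), d, 1)[0]        # linear fit of d(t)
        if d[-1] < THRESHOLD_A:
            return end_state          # close enough: hand over to plain MD to relax
        if slope < 0:
            state = end_state         # productive segment: ligand approached the site
        else:
            # Tabu move: discard the segment and restart from the previous
            # coordinates with randomly reassigned velocities.
            state = state.with_random_velocities()
    return state
```

The key design choice is that only segments whose fitted distance slope is negative are kept, which biases the sampling toward binding without adding any force to the system.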
In detail, short (600 ps long) unbiased MD trajectories are collected, and the distance between the ligand center of mass and the ribonucleic acid binding site is monitored during the entire simulation; those distance points are then fitted to a linear function. Only productive MD steps in which the computed slope is negative are maintained, thus indicating a ligand approach toward the RNA binding site. Otherwise, the simulation is restarted by randomly reassigning the atomic velocities from the previous set of coordinates. The supervision algorithm controlled the sampling until the distance between the ligand and the ribonucleic binding site dropped below 5 Å, at which point it was disabled, and a short classical MD simulation was performed, allowing the system to relax. For each case study, up to a maximum of 10 SuMD binding simulations were collected, of which only the best was thoroughly analyzed and discussed in the manuscript. A detailed report on SuMD protocol performance can be found in the Supplementary Material. The three-dimensional RNA structures investigated in this study, along with the nucleotides selected for the computation of the respective binding cleft center of mass, are reported in Figure 1. (FIGURE 1 | The case studies selected for the SuMD methodological validation are herein summarized and subdivided into RNAs of viral origin, prokaryotic origin, or artificially engineered aptamers. For each complex investigated, the three-dimensional structure is depicted, representing the reference ligand in a green color, together with the nucleobases selected to define the binding site position in the SuMD simulations. Finally, the chemical structures of each ligand are reported, along with the experimental datum of binding affinity. In the case of the peptide, the primary sequence is reported, highlighting the basic residues constituting the arginine-rich motif (ARM) in a blue color.) In this implementation, the SuMD code is written in the Python programming language and exploits the ProDy python package to perform the geometrical ligand-target supervision process (Bakan et al., 2011). SuMD Trajectory Analysis All the SuMD trajectories collected were analyzed by an in-house tool written in the tcl and python languages, as described in the original publication (Salmaso et al., 2017). Briefly, the dimension of each trajectory was reduced, saving MD frames at a 20 ps interval; each trajectory was then superposed and aligned on the RNA phosphate atoms of the first frame and wrapped into an image of the system simulated under periodic boundary conditions. The geometric performance of the SuMD methodology was evaluated by computing the ligand RMSD (root mean square deviation) along the entire simulation with respect to the experimentally resolved three-dimensional complex. Furthermore, the RMSDs of the RNA structures were computed on the P atoms of the backbone and plotted over time, and these can be viewed in Supplementary Figures S1-S6A. The ligand-RNA interaction energy during the recognition process was estimated using an MMGBSA protocol, as implemented in AMBER 2014, and the MMGBSA values were plotted over time (Miller et al., 2012). The MMGBSA values were also arranged according to the distances between the ligand and ribonucleic target mass centers in the Interaction Energy Landscape plots (Supplementary Figures S1-S6B).
Here, the distances between mass centers are reported on the x-axis, while the MMGBSA values are plotted on the y-axis, rendered by a colorimetric scale going from blue to red for negative to positive energetic values. These graphs allow for the evaluation of the variation of the interaction energy profile at different ligand-RNA distances; this helps to identify metastable binding states during the binding process. Furthermore, for each target investigated in this work, the nucleotides within a distance of 4 Å from the respective ligand atoms were dynamically selected to qualitatively and quantitatively evaluate the number of contacts during the entire binding process. The most contacted nucleotides were thus selected to compute the per-nucleotide electrostatic and vdW interaction energy contributions with the ribonucleic target. NAMD was used for post-processing computation of electrostatic interactions using the AMBER ff14SB force field. The cumulative electrostatic interactions were computed for the same target nucleotides by summing the energy values frame by frame along the trajectory, and the resulting graphs are reported at the lower-right of the movies provided as Supplementary Videos 1-6. Representations of the molecular structures were prepared with the VMD software (Humphrey et al., 1996). RESULTS AND DISCUSSION To investigate the SuMD applicability domain and accuracy in the context of ribonucleic acid molecular recognition, a retrospective validation approach was selected, stressing the computational methodology's ability to geometrically reproduce experimental binding modes of SMIRNAs or small folded peptides. The three-dimensional structures of six ligand-RNA complexes, solved both through X-Ray and NMR spectroscopy, were retrieved from the RCSB PDB database and prepared for subsequent SuMD simulations by moving the ligands far away from the ribonucleic binding clefts, as accurately described in the materials and methods section. The RNA structures, reported in Figure 1, were selected to span a vast plethora of pharmaceutically interesting ribonucleic targets, ranging from viral and bacterial origin up to artificially engineered aptamers. Furthermore, the selected structures provide an overview of different peculiar three-dimensional RNA motifs, from a small stem-loop to a riboswitch characterized by a complex architecture. The results collected through SuMD simulations are reported herein along with the geometric and interaction analyses performed. A summary of all the statistics regarding the simulation performances is reported in the Supplementary Information. Targeting Viral RNAs (vRNAs) The discovery and design of new antiviral compounds targeting viral proteins are complicated by the enormous variability affecting these macromolecules, an aspect representing the core of the drug resistance phenomenon. On the other hand, lncRNA regions belonging to viral genomes, being less affected by genetic mutations and having no counterpart in human organisms, are becoming attractive pharmaceutical targets. Aminoglycosides, antibacterial drugs known to inhibit protein synthesis by acting at the level of the prokaryotic ribosome, have proven to be promiscuous molecules that are also able to bind lncRNA structural elements of viral genomes (Bernacchi et al., 2007).
This experimental evidence has paved the way for the discovery of drug-like small molecules able to inhibit the replication of a plethora of pathogenic viruses, such as human immunodeficiency virus (HIV), hepatitis C virus (HCV), severe acute respiratory syndrome coronavirus (SARS-CoV), and influenza A virus (Hermann, 2016). Influenza A Virus Promoter Influenza A represents a group of viruses, differing in virulence and pathogenicity profiles, that all belong to the Orthomyxoviridae family. The Influenza A genome comprises eight negative-sense single-stranded RNA segments (vRNA) encoding 13 proteins (Coloma et al., 2009). The 5′-end and 3′-end terminal portions of each vRNA segment, in the physiological condition, fold together into a partial duplex, forming an arrangement called a promoter, which controls RNA-dependent RNA polymerase (RdRp) recognition and, thus, genome transcription and replication (Desselberger et al., 1980). Since the promoter sequences are highly conserved among Influenza A viruses and marginally affected by genetic variation that can enhance the onset of drug resistance, they represent a promising pharmaceutical target. The Varani research group, exploiting an NMR-based fragment screening approach, has identified 6,7-dimethoxy-2-(1-piperazinyl)-4-quinazolinamine (DPQ) as a promising scaffold for antiviral drug development, as it is able to bind the Influenza A promoter region with a low micromolar affinity (Kd = 50.5 ± 9 µM) and is also able to inhibit the virus replication in a comparable range of concentration (Lee et al., 2014). The SMIRNA binding mode was experimentally elucidated by means of NMR, as depicted in Figure 1, confirming DPQ recognition within the RNA major groove at the (A-A)-U internal loop level. The SuMD algorithm was then applied to this first case study, in an attempt to investigate the entire DPQ binding mechanism, stressing at the same time the methodology's accuracy in reproducing the experimentally solved complex. A first interesting aspect is represented by the reduced time window of 30 ns required to sample a putative molecular recognition event between DPQ and its ribonucleic target (Supplementary Video 1). This result is quite impressive, especially if compared with classical MD simulations, which otherwise would require extensive computational efforts. At the end of the simulation, as depicted in the Figure 2 graph, the SMIRNA has converged, both from a geometric and an interaction point of view, toward the NMR structure binding mode. The low RMSDmin value of 2.6 Å, computed on DPQ heavy atoms, confirms, also in the case of nucleic acids, SuMD's ability to predict a reasonable binding hypothesis. This value must not be evaluated with excessive severity, having been calculated only with respect to one of the 16 conformations of the complex deposited on the PDB database. The solution NMR structure has indeed highlighted an important variability in the DPQ positioning within the RNA binding site, with an RMSDmax, computed on ligand heavy atoms, of 1.4 Å. Moreover, this approach makes it possible to peek at the entire molecular recognition process and to not focus merely on the final state. Figure 2C reports a time-dependent analysis performed on the nucleotides most frequently contacted during the simulation, reporting their cumulative contribution to binding, which is defined as the sum of each nucleotide's electrostatic and van der Waals (vdW) interaction energies.
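To make this bookkeeping concrete, here is a toy Python sketch of the cumulative per-nucleotide tally (not the authors' tool; the energy array is synthetic stand-in data, and the nucleotide labels simply echo those discussed in the text):

```python
import numpy as np

# Hypothetical per-frame electrostatic + vdW energies (kcal/mol), one column
# per contacted nucleotide; in practice these would come from the trajectory.
rng = np.random.default_rng(1)
nucleotides = ["A9", "A10", "A11", "C21", "G22", "G23", "G24"]
n_frames = 1500
energies = rng.normal(-0.5, 0.3, size=(n_frames, len(nucleotides)))

cumulative = energies.cumsum(axis=0)   # running total along the trajectory
totals = energies.sum(axis=0)
for name, tot in sorted(zip(nucleotides, totals), key=lambda p: p[1]):
    print(f"{name}: {tot:8.1f} kcal/mol cumulative over {n_frames} frames")
```

Plotting the `cumulative` columns over time yields exactly the kind of per-nucleotide contribution curves referred to above: nucleotides that engage early show a steep initial descent, while late-stabilizing ones descend only after the ligand settles.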
It is encouraging to note how the nucleotides that computationally have shown a primary role in stabilizing the DPQ complex (A9-A11 and C21-G24) also correspond to those that have experimentally experienced the greatest chemical shift perturbations during NMR experiments. In addition, as reported in Figures 2B,C and in Supplementary Figure S1, the SuMD simulation allows us to decipher the different roles played by the aforementioned nucleotides, some of them (A9-A11) participating only during the early phases of SMIRNA recognition (until 10 ns) and the others (C21-G24) stabilizing the complex within the ribonucleic cleft (after 10 ns). These results appear even more interesting if we consider the high flexibility characterizing the small RNA duplex. Despite the reduced time window explored by the SuMD methodology, the structure has indeed shown a relevant RMSDmax of 4.2 Å from the initial experimental coordinates (Figure 2D). (Figure 2E: flexibility characterizing the RNA structure during the DPQ binding event; the binding cleft dimension was monitored as the distance dynamically occurring between two key nucleotides, A8 and C21.) In detail, after a few ns of simulation, the promoter duplex in the ligand-free form folds back on itself, and only DPQ binding allows the structure to return to the experimental linear conformation (Figure 2E). The same behavior was coherently captured also by NMR experiments, which previously highlighted how the RNA helical axis curvature changes upon ligand binding, enlarging the dimension of the binding cleft (Lee et al., 2014). HIV-1 Rev-RRE Complex The human immunodeficiency virus of type 1 (HIV-1) is a retrovirus belonging to the Lentivirus genus, and it is responsible for acquired immunodeficiency syndrome (AIDS). RNA-protein interactions play a fundamental role in controlling the HIV replication cycle and, consequently, the virulence profile (Battiste et al., 1996). HIV-1 Rev, in particular, is a small regulatory protein that drives the nuclear export of unspliced and partially spliced viral mRNA transcripts. The Rev protein mediates its function by recognizing a purine-rich bulge within stem-loop IIb of the Rev response element (RRE), a highly structured mRNA region within an env intron (DiMattia et al., 2010). The minimal binding domain in the Rev protein is constituted by a short α-helix folded peptide, which contains an arginine-rich binding motif (ARM), a domain known to be important also for Tat-TAR (trans-acting region) interactions in HIV. Harada et al., exploiting an in-vivo strategy, have identified a class of specific RNA-binding peptides able to target the HIV-1 Rev-RRE complex. Specifically, RSG-1.2, an α-helical peptide of 22 amino acids, was selected from a combinatorial library and subsequently engineered, providing a 7-fold increase in binding affinity and a 15-fold increase in selectivity toward the ribonucleic target, further resulting in an in vivo ability to completely disrupt the RNA-Rev protein interaction (Harada et al., 1996, 1997). The solution structure of an oligonucleotide portion derived from the HIV-1 RRE-IIb stem domain in complex with an RSG-1.2 peptide was solved through NMR, providing structural details about vRNA targeting by means of the small peptide (Gosser et al., 2001). We have therefore chosen this case study to validate SuMD performance in one of the most complex methodological scenarios, namely the molecular recognition between two highly flexible partners: a small α-helix folded peptide and a portion of ribonucleic acid.
In addition, the predominant electrostatic component that characterizes both the RNA polyanionic backbone and the small polycationic peptide, which contains six Arg residues, makes the prediction of the binding mode even more complex. Despite the unfavorable premises, a few tens of ns proved to be sufficient for the SuMD protocol to sample a binding hypothesis for the RSG-1.2 peptide. During the simulation, as observable in Supplementary Video 2, the peptide was accommodated with the correct orientation within the HIV-1 RRE-IIb major groove, reaching, as reported in Figure 3A, an RMSDmin value of 4.3 Å, computed on Cα peptide atoms. (Figure 3: superimposition between the experimental NMR complex (PDB ID 1G70, green-colored peptide) and the SuMD conformation with the lowest RMSD along the trajectory (orange-colored peptide); the nucleotides surrounding the binding site, along with the R residues belonging to the ARM, are reported. (C) Dynamic total interaction energy (electrostatic + vdW) computed for the most contacted RNA nucleobases.) Although the geometric accuracy is lower than in the previous example, the SuMD simulation has allowed us to identify the main interaction hotspots stabilizing the complex. As hypothesized and confirmed by Figure 3C, the ARM motif plays a fundamental role in anchoring the RSG-1.2 peptide, with the charged residues R16, R17, and R18 mediating forked electrostatic interactions with the phosphate atoms of the ribonucleic backbone, in a manner coherent with the experimentally solved structure. Furthermore, the analysis performed on the trajectory (Supplementary Figure S2) has highlighted the peculiar behavior of R14; its guanidinium side chain is deeply buried within the RNA groove, where, differently from the other charged residues, it stabilizes the peptide through a solvent-shielded hydrogen bond and vdW interactions, an aspect in great agreement with the experimental NMR data (Gosser et al., 2001). Targeting Prokaryotic RNAs In the last decades, the discovery that many aminoglycoside compounds clinically exploited to treat severe bacterial infections mediate their action by affecting the ribosome machinery confirmed the initial hypothesis of considering RNA, especially prokaryotic RNAs, as an appealing pharmaceutical target (Disney, 2019). However, the drugs that target ribosomes represent an exception, rather than a model: the abundance of ribosome macromolecules in the cytoplasmic compartment means, therefore, that even modest drug-binding affinity could result in acceptable therapeutic efficacy (Warner et al., 2018). Apart from ribosomes, a putative regulatory role of lncRNAs in bacterial systems has recently become increasingly clear. From a mechanistic point of view, it is possible to distinguish regulatory RNAs acting in trans, either by base-pairing with a complementary region in the target mRNA or by sequestration of an RNA-binding protein, and regulatory sequences that, in contrast, are encoded as part of the mRNA for the gene they regulate, thus acting in cis (Sherwood and Henkin, 2016). Riboswitches, which are structured elements typically found in the 5′ untranslated regions (UTR) of mRNAs, represent an interesting example of the latter case (Tucker and Breaker, 2005). These RNA elements, through an aptameric portion, directly sense a physiological signal (ions, cofactors, or metabolites) and transmit the information to the gene expression machinery via a signal-dependent RNA conformational change (Sherwood and Henkin, 2016).
The discovery that the clinically approved antibacterial roseoflavin exerts part of its therapeutic action by binding the flavin mononucleotide (FMN) riboswitch, together with the increasing availability of structural data on riboswitches, has made these targets very interesting pharmaceutically (Pedrolli et al., 2012). S-Adenosylhomocysteine Riboswitch S-adenosyl-(L)-methionine (SAM) is a fundamental cofactor that serves as the primary methyl group donor in a large set of biochemical reactions. In bacteria, SAM homeostasis is so important that at least six classes of RNA riboswitch regulatory elements have by now been characterized (Weinberg et al., 2010). Following SAM-mediated methylation, the by-product S-adenosyl-(L)-homocysteine (SAH) that is released, due to its high toxicity, must be readily degraded by SAH hydrolase (ahcY) enzymes. Recently, a new type of riboswitch was discovered, able to sense the intracellular SAH concentration and respond to it, upregulating the expression of ahcY enzymes in prokaryotes (Wang et al., 2008). The aptameric portion of the SAH riboswitch recognizes its cognate ligand with a quite high binding affinity of 32 nM and, surprisingly, also provides a notable selectivity profile over the original cofactor SAM (1,000-fold lower affinity), ensuring a fine regulation of the SAM/SAH metabolic cycle. The high-resolution crystal structure of the SAH riboswitch aptameric domain in complex with its cognate ligand was recently solved, elucidating the molecular basis for SAH substrate specificity (Edwards et al., 2010). This case study not only represents a pharmaceutically appealing prokaryotic RNA target but also provides the opportunity to stress SuMD performance on a more complex binding site recognition, compared to the simple duplex structures investigated so far. The SAH molecule indeed binds a small cleft located in the minor groove of the SAH riboswitch, which adopts an unusual "LL-type" pseudoknot conformation. Also in this case, around 20 ns were sufficient for the SuMD protocol to sample a putative molecular recognition trajectory (Supplementary Video 3). In detail, as reported in Figure 4, after only a few nanoseconds, SAH reached the riboswitch binding cleft, reproducing the crystallographic complex with a notable geometric accuracy (RMSDmin 1.7 Å). Then, the ligand conformation remained stable until the end of the simulation. From an interaction point of view, as reported in Figure 4C and also in Supplementary Figure S3, the SuMD trajectory analysis correctly highlighted the stabilizing role played by nucleotides C16 and A29, between which the adenine core of SAH is intercalated, providing the greatest vdW interactions. In contrast, the analysis of the electrostatic contribution to binding has revealed a divergent situation. Indeed, nucleobase G15, mediating a hydrogen bond network with the SAH adenine scaffold, is responsible for a great stabilizing contribution, whereas nucleotide C46 has shown, during the entire simulation, an unexpected repulsive contribution. The reason for this can be found in the conformation sampled by SuMD for the SAH homocysteine terminal tail. As depicted by Figure 4B, the carboxylic moiety of the ligand spatially approaches the C46 pyrimidine carbonyl, whereas in the crystallographic structure (green representation), through a simple bond rotation, the interaction is instead mediated by the vicinal amino group. Curiously, the same research group also deposited on the PDB database a worse-resolution structure of the complex under investigation (PDB ID 3NPN), reporting the same apparently energetically unfavored SAH conformation described by the SuMD protocol (Figure 4B, circular window), thus validating the soundness of the sampling and the flexibility characterizing the ligand tail. Pre-queuosine 1 Riboswitch Pre-queuosine 1 (PreQ1), or 7-aminomethyl-7-deazaguanine, is a metabolic intermediate in the synthetic pathway that, starting from the guanosine-5′-triphosphate (GTP) nucleotide, originates the hypermodified guanine derivative queuosine (Q). Q has been detected both in eubacteria and eukaryotic organisms, where it occupies the anticodon wobble position of tRNAs specific for the amino acids asparagine, aspartate, histidine, and tyrosine (Roth et al., 2007). Q modification has been related to an improvement in translation fidelity as well as bacterial pathogenicity. Interestingly, only prokaryotes can synthesize Q via a multistep reaction, whereas eukaryotes are obliged to assimilate the nucleoside through the diet (Eichhorn et al., 2014). In bacteria like Bacillus subtilis (Bs) or Thermoanaerobacter tengcongensis (Tt), the expression of genes responsible for Q biosynthesis is negatively modulated by the intracellular concentration of the intermediate PreQ1. PreQ1, binding to a small aptameric RNA motif composed of 34 nucleotides, determines the folding of the PreQ1 riboswitch into an "H-type" pseudoknot structure in which more than half of the nucleobases engage in triplet or quartet interactions (Rieder et al., 2010; Jenkins et al., 2011). The three-dimensional structure of the class I PreQ1 riboswitch in complex with its cognate ligand was solved by X-ray crystallography (PDB ID 3Q50), and this allowed speculation about the quite impressive binding affinity characterizing this endogenous precursor (Kd = 2 nM) (Edwards et al., 2010). Even in this case, <40 ns of SuMD simulation proved to be sufficient in describing a binding event between the metabolic intermediate PreQ1 and its related riboswitch (Supplementary Video 4). As observable in Figure 5A, PreQ1 recognition mainly proceeds through three well-distinguishable phases. In the beginning, the ligand approaches the riboswitch binding site vestibule, where it negotiates for about 15 ns the accommodation in the deep cleft before converging, with great geometric accuracy (RMSDmin 1.3 Å), toward the solved crystallographic conformation. This behavior has also been captured by the interaction energy graph (Supplementary Figure S4B), highlighting the presence of two major sites visited during the recognition trajectory, i.e., the canonical binding cleft and the aforementioned external vestibular region, located about 10 Å apart. It is interesting to note the comparable interaction energy characterizing these two distal sites, which are distinguishable for their different degrees of solvent exposure. In addition, the dynamic interaction fingerprint reported in Figure 5C elucidates the role played by the binding site nucleotides during recognition, in a coherent way with respect to the results reported in the original publication. All the cases considered so far have confirmed the ability of SuMD to predict reasonable binding hypotheses for different ligands when exploiting, as a starting point, the experimental structures of the ribonucleic targets in which each of these ligands was originally co-crystallized.
Curiously, the same research group also deposited on the PDB database a worst resolution structure of the complex under investigation (PDB ID 3NPN), reporting the same apparently energetic unfavored SAH conformation described by the SuMD protocol ( Figure 4B, circular window), thus validating the goodness of the sampling and the flexibility characterizing the ligand tail. Pre-queuosine 1 Riboswitch Pre-queosine 1 (PreQ 1 ), or 7-aminomethyl-7-deazaguanine, is a metabolic intermediate in the synthetic pathway that, starting from guanosine-5 ′ -triphosphate (GTP) nucleotide, originates the hypermodified guanine derivate queuosine (Q). Q has been detected both in eubacteria and eukaryotic organisms where it occupies the anticodon wobble position of tRNAs specific for the amino acid asparagine, aspartate, histidine, and tyrosine (Roth et al., 2007). Q modification has been related to an improvement in translation fidelity as well as bacterial pathogenicity. Interestingly, only prokaryotes can synthesize Q via a multistep reaction, whereas eukaryotes are obliged to assimilate the nucleoside through the diet (Eichhorn et al., 2014). In bacteria like Bacillus subtilis (Bs) or Thermoanaerobacter tengcongenesis (Tt), the expression of genes responsible for Q biosynthesis is negatively modulated by the intermediate PreQ 1 intracellular concentration. PreQ 1, binding to a small aptameric RNA motif composed of 34 nucleotides determines the folding of the PreQ 1 riboswitch in an "H-type" pseudoknot structure in which more than half of the nucleobases engage in triplet or quartet interactions (Rieder et al., 2010;Jenkins et al., 2011). The three-dimensional structure of the class I PreQ 1 riboswitch in complex with its cognate ligand was solved by X-ray crystallography (PDB ID 3Q50), and this allowed us to speculate about the quite impressive binding affinity characterizing this endogenous precursor (K d = 2 nM) (Edwards et al., 2010). Even in this case, <40 ns of SuMD simulation proved to be sufficient in describing a binding event between the metabolic intermediate PreQ 1 and its related riboswitch (Supplementary Video 4). As observable in Figure 5A, PreQ 1 recognition mainly articulates in three well-distinguishable phases. In the beginning, the ligand approaches the riboswitch binding site vestibule where it negotiates for about 15 ns the accommodation in the deep cleft before converging, with great geometric accuracy (RMSD min 1.3 Å), toward the solved crystallographic conformation. This behavior has also been captured by the interaction energy graph (Supplementary Figure S4B), highlighting the presence of two major sites visited during the recognition trajectory, i.e., the canonical binding cleft and the aforementioned external vestibular region, located about 10 Å apart. It is interesting to note the comparable interaction energy characterizing these two distal sites, which are distinguishable for their different degrees of solvent exposition. In addition, the dynamic interaction fingerprint reported in Figure 5C, elucidates the role played by the binding site nucleotides during recognition in a coherent way with respect to the results reported on the original publication. All the cases considered so far have confirmed the ability of SuMD to predict reasonable binding hypotheses for different ligands when exploiting as starting point the experimental structures of the ribonucleic targets in which each of these ligands were originally co-crystallized. 
From a pharmaceutical and applicative perspective, however, it is often required to rationalize the binding mode of compounds that are in most of the cases different from the ones now co-crystallized. It has thus become crucial to understand how the choice of the initial RNA target conformation could affect SuMD performance. The studies performed by the Schneekloth Jr. group in the attempt to experimentally asses the druggability profile of PreQ 1 -I riboswitch through synthetic organic molecules have then given us an opportunity to further explore this question. In a recent scientific work, it the discovery of HMJ was indeed reported; this is a dibenzofuran derivative that, despite the not obvious chemical similarity with PreQ 1 , exhibits a sub-micromolar affinity to the RNA target (K d = 0.5 µM) and the ability to induce premature transcriptional termination (Connelly et al., 2019). The three-dimensional structure determination of the complex was, however, quite difficult and was achieved only by designing a hybrid riboswitch aptamer sequence in which the nucleobase A14, as well as the two vicinal ones, were removed (PDB ID 6E1U). Since this structure lacked a key binding site nucleotides, it represent a non-optimal starting point for a computational study; we therefore decided to investigate the HMJ binding mechanism, exploiting the high-quality riboswitch structure originally solved in the presence of PreQ 1 and then comparing the accuracy of the prediction with the experimental solved data. Encouragingly, even for such a system, the SuMD protocol has succeeded in sampling, in about 30 ns, an extremely accurate binding hypothesis for HMJ, whose RMSD min was computed with respect to reference structure (PDB ID 3Q50) and has reached the impressive value of 0.5 Å (Figure 6A, Supplementary Video 5). From the analysis of the trajectory, it was furthermore possible to confirm how the benzofuran ligand competes with PreQ 1 for the riboswitch binding site. As depicted by Figure 6C, and as is coherent with experimental evidence, HMJ makes a strong stabilizing interaction with the nucleobases G5, G11, and C16, which define the "floor" and the "ceiling" of the binding cleft where the aromatic core stacks, and nucleobase U6, C15, and A29, which shape instead the binding cavity borders. Moreover, the Interaction Energy Landscape (Supplementary Figure S5B) highlights a binding profile similar to the one previously described for the cognate ligand PreQ 1 , confirming the vestibular region's role in recruiting the riboswitch binding partners. Targeting Artificial RNA Aptamers Containing G-Quadruplex Motifs The discovery, made in 1994, that the green fluorescent protein (GFP) from the jellyfish Aequorea victoria could be used as a marker for protein localization and expression has revolutionized molecular biology to the point that, in 2008, the discovery earned a Nobel prize (Swaminathan, 2009). However, since a minimal portion of the human genome is translated into proteins while most of it is transcribed into RNA, being able to investigate the dynamic and spatial properties of the human transcriptome has become essential. As there are no known naturally fluorescent RNAs, a series of in vitro engineered ribonucleic tags able to fold into peculiar three-dimensional structures were selected (Trachman and Ferré-D'Amaré, 2019). 
These RNAs, through an aptameric domain, can bind fluorophore molecules, increasing their spectroscopic signal and hence allowing for the dynamic monitoring of nucleic acid expression and localization in cells. Most of the fluorophore RNA binding sites, despite the different overall architecture, have evolutionarily converged on G-quadruplex motifs, supporting their important role in enhancing the fluorescence phenomenon, in a similar way to how the β-barrel domains characterize GFPs (Warner et al., 2014). Corn Aptamer Corn is one recently developed RNA aptamer engineered in vitro to bind 3,5-difluoro-4-hydroxybenzylidene imidazolinone-2-oxime (DFHO), a fluorophore analogous of red fluorescent protein (RFP) (Warner et al., 2017). Corn-DFHO differs from other similar RNA tags for its limited light-induced cytotoxicity, its minimal background fluorescence, and its increased photostability, thus representing a valuable imaging tool. Corn aptamer is characterized by an atypical threedimensional structure elucidated by X-ray crystallography and biophysical experiments. How it is observable in Figure 1 that two RNA segments join together in a quasi-symmetric homodimer structure (1:2 chromophore:RNA stoichiometry) at the interfaces where a single DFHO molecule is tightly bound (K d = 70 nM), stacking between two G-quadruplex planes stabilized by the presence of K + ions (Warner et al., 2017). Despite the lack of therapeutic application for this aptamer, which is instead more suitable for molecular biology studies, the investigation of such a complex binding site recognition can be considered as a proof of concept to validate G-quadruplex motif targeting through an SuMD approach. Nucleotide quartet structures, which presence have been extensively characterized in the telomeric terminal portion of eukaryotes chromosomes and within gene promoter regions, are indeed acquiring increasing attention, as they could represent promising pharmaceutical targets (Balasubramanian and Neidle, 2009). As shown in Supplementary Video 6, SuMD methodology has produced a putative binding trajectory for DFHO in <30 ns, converging with an impressive geometrical accuracy toward the experimental solved complex (RMSD min 0.34 Å) ( Figure 7A). Moreover, the Dynamic Total Interaction Energy plot reported on Figure 7C, strongly retraces the interactive pattern already described on the original scientific work, highlighting the role played by nucleotide G12, G25 (first protomer), and g25 (second protomer) in circumscribing the sandwich cavity within which the aromatic chromophore stacks. Nucleobase A14 (first protomer) and a11 (second protomer) instead mediated a hydrogen bond network with oxime and imine moieties of the DFHO ligand, respectively. SuMD simulation has also illuminated how the entire binding process is not driven by the electrostatic contribution, as often it happens for SMIRNA, but is instead controlled by the vdW interactions (Supplementary Figure S6). From this perspective, Corn aptamer represents an unusual, but potentially revolutionary case study, as it distorts an old paradigm that has now since affected the identification of putative RNA binders. DFHO has indeed demonstrated how even apolar or anionic molecules can target ribonucleic acids reaching a nanomolar binding affinity. This provides the opportunity to expand the chemical space explorable by SMIRNA beside that of the wellknown, but often problematic, polycationic compounds. 
CONCLUSION Over the last decades, among all the biological macromolecules, proteins have represented the target of choice for the development of new drug candidates. Nucleic acids, on the other hand, have so far represented a less attractive target due to the difficulty in guaranteeing a selective recognition mechanism. The recent discovery of peculiar and physiologically stable three-dimensional conformations characterizing RNA oligomers has, however, paved the way for the investigation of SMIRNAs. The increasing availability of structural data for a wide range of relevant therapeutic ribonucleic targets has promoted the application of well-validated SBDD computational approaches, such as molecular docking, also in this field. However, the remarkable flexibility and the peculiar electrostatic potential, which distinguish nucleic acids from proteins, have readily highlighted the limitations of many of these methodologies. MD simulations would allow us to overcome some of the aforementioned problems; however, the computational cost required to capture rare events such as ligand binding has so far limited their routine utilization. In this work, we have investigated the applicability domain of SuMD in the field of pharmaceutically relevant RNA polymers. The performance of the protocol was measured as the geometric accuracy, expressed in terms of RMSD, with which an experimentally solved complex is predicted by the SuMD simulation. Case studies in this research were chosen in such a way as to span very different ribonucleic secondary, tertiary, and even quaternary structures, starting from small duplex stem-loops up to pseudoknots or aptameric homodimers, which contain G-quadruplex motifs. Furthermore, the recognition of different ligands was investigated, both small organic molecules and folded α-helical peptides. Although this work must be considered as a preliminary investigation and the number of examples taken into consideration cannot guarantee statistical robustness, it is encouraging to note how, in all six ribonucleic complexes simulated, SuMD correctly reproduced the experimentally solved final state starting from the unbound state in a few hours of simulation. The accuracy of the protocol varies significantly in a system-dependent manner, but, in all cases, it was possible to collect valuable interaction and energetic information about the nucleotides dynamically involved in the recognition process. Curiously, the RNA targets in which the architecture of the binding site is not very complex, such as the stem-loop domains of the Influenza A promoter and HIV-1 RRE, are those in which the computational protocol experienced the poorest geometric accuracy in reproducing the ligand-binding mode. A separate consideration must be made for the latter complex (PDB ID 1G70), since the recognition between two extremely flexible entities, i.e., the small peptide and the RNA duplex, represents a very challenging case. However, the results obtained, with an RMSDmin lower than 5 Å, are in line with those previously described when applying the SuMD methodology to peptide-protein recognition. Moving toward more complex binding sites, such as those that characterize pseudoknot riboswitch structures or G-quadruplex-shaped clefts, the geometric accuracy of the method progressively improves, with the best results obtained in the artificial aptameric structure (RMSDmin 0.34 Å).
These findings are in agreement with a recent perspective work that assessed how the complexity of an RNA binding site, measured in terms of information content, could represent a valuable discriminant to identify druggable oligonucleotides (Warner et al., 2018). Indeed, the three-dimensional complexity of a binding site makes a ribonucleic pocket more similar to a protein-like environment than to an ordered and repetitive structure like that characterizing DNA. Furthermore, the high conformational flexibility that has characterized all the investigated ribonucleic structures during SuMD simulations (RMSDs computed on the RNA backbone are reported in the Supplementary Material) has evidenced the importance of adopting techniques able to consider the flexibility of both macromolecules and ligands to better describe such complex molecular recognition. In conclusion, we have shown how SuMD can be a valid computational method to generate binding hypotheses for ribonucleic targets on a nanosecond timescale, explicitly considering both the role of the solvent and the flexibility of the macromolecule. SuMD simulation results could not only help with the interpretation and investigation of the complex mechanism of recognition characterizing SMIRNAs, especially when structural information is not available, but they could also guide the rational discovery and optimization of these compounds. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS MB carried out the experiment. MB wrote the manuscript with support from MS. MS and SM supervised the project. MB and SM conceived the original idea.
Global Ethical Principles in Healthcare Networks, Including Debates on Euthanasia and Abortion

In today's ever-evolving healthcare landscape, the focus is shifting towards integrated care, inter-organizational cooperation, and healthcare networks (HCNs) as alternatives to traditional healthcare institutions. This transformation is driven by factors such as an aging population and increasing healthcare costs, necessitating a reevaluation of ethical considerations to ensure the well-being of patients remains central. This review provides a narrative overview of ethics within HCNs, with a focus on patient-centered medical ethics. It elaborates on the four fundamental ethical principles, namely justice, beneficence, nonmaleficence, and autonomy. The principle of justice underscores HCNs' ethical obligation to provide equitable and transparent access to all patients, ensuring fairness in resource allocation. The principle of nonmaleficence underscores the responsibility to prioritize patient safety, while beneficence obliges HCNs to ensure continuity of care across all dimensions. Furthermore, the principle of autonomy is redefined as a commitment to actively promote and respect patient choices. HCNs that do not adhere to these ethical principles raise concerns and lack ethical justification. Additionally, the review delves into the legal aspects of euthanasia and abortion, both of which present complex ethical challenges in healthcare systems globally. A comparative analysis is provided, examining their legal status in Islamic countries, European nations, and the United States. This study sheds light on the evolving ethical landscape in HCNs and the diverse global perspectives on contentious issues. Therefore, harmonizing legislation and defining appropriate boundaries are crucial steps toward upholding ethical standards in healthcare practices on a global scale.

Introduction And Background

Healthcare organizations are stepping towards a new generation. Integrated care, inter-organizational cooperation, and healthcare networks (HCNs) are rapidly becoming the focal points of attention in the delivery of medical treatment, replacing the separate healthcare institutions that have historically served as the sector's primary support structure [1]. These shifts are the consequence of a number of different causes, including an aging population and the rising costs of the welfare system [1], which have led to a decrease in the availability of medical services [1]. There are high expectations for these types of partnerships and HCNs, which, according to the findings of certain research, have the potential to boost economic efficiency [2] and ultimately result in an improvement in the quality of medical treatment [3]. However, improving the quality of treatment does not provide ethical justification.
When discussing ethical duties for networks, one of the most prevalent approaches is to examine the problem from the point of view of the ethics of organizations or businesses. This makes sense, since the question is about how organizations like HCNs should be set up. Following this comes the question of how the legal and moral responsibilities should be dispersed across the vast network. However, studies demonstrate that patients do not behave as equal or rational consumers within the healthcare sector [4]. Many individuals seeking medical services do not assess different healthcare organizations or networks in terms of service quality and costs [1]. These factors underscore the potential dangers of assuming that healthcare organizations function primarily as businesses. In addition, there is a possibility that adopting a business ethics strategy may result in a devaluation of the pivotal role that patients play in the healthcare system, since this strategy will consider patients to be only one stakeholder among many others. It is possible that, as a result of this, the interests of patients will be given less weight than those of other stakeholders [1].

However, the fundamental concept of medical practice is predicated on the ethical imperative of placing the patient's well-being ahead of all other concerns. Thus, it would be more beneficial to establish an ethical approach to HCNs that begins with patients and the ethical claims they might make on these networks [1]. Enshrined in medical ethics rules all around the globe is the responsibility that healthcare professionals have to put their patients' needs ahead of their own, which is often considered to be the most important ethical commitment they have [5].

Ethical requirements for HCNs present a challenge, but it is feasible to establish specific ethical duties by
Method A narrative literature review was conducted using PubMed and Google Scholar databases, focusing on English sources published after 2013 that addressed principles of ethics and their challenges in HCNs.The rationale behind selecting articles from the last decade was to ensure access to the most recent and updated information pertaining to ethical considerations and contextual issues in HCNs.The search terms employed included Principle of Ethics, Autonomy, Justice, Beneficence, Nonmaleficence, Euthanasia, and Abortion.Inclusion criteria encompassed articles discussing these principles and their application in HCNs, as well as debates surrounding euthanasia and abortion within this context.Following the search, articles were screened and sorted based on relevance to the research objectives.Subsequently, the necessary data was extracted and independently evaluated for reliability.This systematic approach aimed to provide a comprehensive understanding of ethical principles and challenges in HCNs, along with debates on contentious issues like euthanasia and abortion. Review The patient and the four principles Patients have the ability to make substantial ethical claims on the organizations and networks that provide healthcare.In clinical treatment, patients can anticipate that HCNs will fulfill their ethical duties of autonomy, justice, beneficence, and nonmaleficence (Figure 1).This is a common method for classifying these ethical claims, and it is based on the well-known and widely accepted principles of Beauchamp and Childress [7].While this framework is often used to discuss the ethics of patient-physician relationships in clinical care, I believe this responsibility applies to HCNs as well.In clinical care, practitioners of healthcare have ethical obligations toward particular patients.However, in HCNs, these duties take the shape of more generic "duties to design" [8].This comprises requirements to construct the network in such a way that the interests of all patients (both existing and prospective) inside the network are preserved. HCNs' ethical duties toward patients Justice and Access to the Network Justice is sometimes described as the fair, equitable, and right treatment of persons.Of the numerous varieties of justice, distributive justice is the most important to clinical ethics.In the context of healthcare, "distributive justice" relates to the fair, equitable, and adequate allocation of healthcare resources.This distribution is established by justifiable norms that frame the conditions of social cooperation [9].It is generally accepted that every patient has an ethical right to access healthcare that is fair and equal [10].The subject of justice is especially crucial for HCNs since they have the ability to greatly influence how healthcare is structured and provided within a healthcare system. 
It is an undeniable fact that the resources of HCNs are limited, and it is necessary to choose a principle of distributive justice that is reasonable for distributing these restricted resources. If access is refused or limited, it should be done for legitimate reasons that are transparent, known to, and understood by the patient. So, patients have a legal right to be informed about the process by which access to a network is governed, as well as the guiding concepts that underlie this process. Access to healthcare is determined by a number of factors, including socioeconomic status and geographical location. Numerous studies have shown that those with better socioeconomic status have easier, quicker, and more convenient access to medical treatment. Socioeconomic status is a major contributor to healthcare disparities in the United States [11]. In terms of geography, the choice of an HCN to provide a certain service at a single site may benefit people who have the physical and financial ability to travel to this area. Healthcare services must ensure that such a strategy does not interfere unnecessarily with a patient's ability to choose their own healthcare provider and, more significantly, does not unjustly favor individuals who live near the network hospitals delivering a specific service. It is important to organize the delivery of certain medical services inside the network in such a way that all potential patients within the network's coverage have access [1].

Nonmaleficence and Safety

Nonmaleficence is a cornerstone of medical ethics. The famous "first do no harm" is often equated with this principle. The idea of injury, on the other hand, is still up for dispute, as both Beauchamp and Childress pointed out in their analysis of this principle. The terms "wronging" and "hurting" are occasionally distinguished from one another. Wronging is thought to be a violation of one's legal or moral rights, while harming is understood to be a setback to one's interests [7]. As a result, one may be treated unfairly without really suffering any physical injury. For instance, if a patient's private medical data are shared with third parties who do not have justified access to these data, the patient has been wronged. This is the case even if the patient was uninformed of this sharing and experienced no loss of interest as a consequence. We regard the nonmaleficence principle to be wider than the "first-do-no-harm" concept and to include the obligation not to mislead patients.

Nonmaleficence refers to a physician's obligation to avoid inflicting needless suffering on a patient. This idea, plainly articulated, gives support for a variety of moral rules, some of which are as follows: do not commit murder, do not cause pain or suffering, do not incapacitate, do not offend, and do not deprive others of the joys of life. In the field of medicine, the ethical principle of nonmaleficence is put into practice when a doctor weighs the benefits of a patient's care against the costs of any and all possible interventions and treatments, avoids those that are unnecessarily burdensome, and decides which course of treatment will be most beneficial to the patient by considering all of the options available [9].
Beneficence and Continuity of Care

Patients also have the right to make the valid argument that HCNs be structured so as to be of the greatest possible benefit to them. First, this indicates that HCNs have to be justified in terms of the degree to which they either give more care or enhance the quality of care already provided. It is often assumed that HCNs may increase quality, such as by increasing the likelihood of delivering seamless and integrated treatment in an environment in which medical professionals from a variety of specialties collaborate. Secondly, when patients become part of a healthcare network (HCN), they will undergo a specific path of care, which may involve specific healthcare providers within the network. However, from the patient's point of view, how this path is organized is not usually of significant concern. The main focus for patients is their right to receive high-quality care, and ethically, they also have the right to expect this care to be delivered smoothly and consistently throughout their entire treatment journey within the network [12]. Patients not only have a legal right to receive excellent medical treatment, but they also have an ethical right to receive care that is consistent, effective, and of a high standard throughout the whole of their care journey. Therefore, HCNs need to be constructed in such a way as to optimally assure continuity of care. There are three dimensions involved here, as given below.

Ensure information continuity: HCNs must guarantee patient information is transmitted securely and efficiently. The HCN must find a balance between disclosing too much (which might injure the patient and violate the duty of nonmaleficence) and too little (which fails to maximally benefit the patient and breaches the duty of beneficence).

Ensure managerial continuity: A network should have a common strategy for managing a health issue. Patients may be harmed if transported within a network and treated differently each time.

Ensure relational continuity: Relationships between patients and physicians continue to be the foundation of contemporary medical practice.
Patient Autonomy

Immanuel Kant (1724-1804) and John Stuart Mill (1806-1873) viewed autonomy as an ethical concept based on the idea that all people have inherent and unconditional value and should be able to make logical judgments and moral choices and use their capacity for self-determination [13]. In 1914, Justice Cardozo wrote a ruling in which he supported this ethical concept by saying, "Every human being of mature years and sound mind has a right to select what will be done with his own body" [9]. Autonomy encompasses individual privacy, freedom of choice, self-control, and the ability to make one's own well-informed judgments. In accordance with the ethical concept of autonomy, HCNs have the moral imperative both to respect and to actively promote patient autonomy in their practice. This principle could involve the freedom to make as many autonomous decisions as possible about one's treatment trajectory, as well as the right to choose one's preferred healthcare expert or provider. HCNs should become maximally efficient while protecting patient autonomy. Thus, it is conceivable for HCNs to create a network in such a way that patients have a greater chance of being matched with caregivers who deliver the highest quality of care consistent with the patients' individual values [1]. The autonomy principle does not apply to those who lack the ability to act independently. This includes infants, children, and individuals with developmental, mental, or physical disorders. Healthcare organizations and state governments in the United States have rules and processes to judge incompetence. When the principle of autonomy is respected, the physician is obligated to provide the patient with the medical information and treatment options that are required for the patient to engage in self-determination. Respecting the concept of autonomy also supports informed consent, confidentiality, and truth-telling [9].

Each of the four ethical standards must be followed unless they conflict with one another. The physician must assess the real responsibility to the patient by weighing the conflicting prima facie responsibilities based on content and context. The ethical obligation to promote autonomy may conflict with HCNs' beneficence duty in specific cases. For example, an HCN may assign a given healthcare service to one network member, sending all patients in need to that institution. All patients at that facility may benefit; however, if networks are constructed such that patients or their data are automatically moved to certain institutions or placed on a particular treatment route, this might contradict the ethical concept of patient autonomy, since patients would be unable to choose where and how they are treated [1]. Furthermore, physicians face many complicated and challenging problems. For example, a patient in shock treated through an intravenous catheter endures discomfort, fluid resuscitation, and edema; here, beneficence overrides nonmaleficence. Consider, by contrast, a patient's refusal of a life-saving intervention such as mechanical ventilation, or a desire for a life-ending measure such as withdrawing mechanical ventilation. When there is a conflict between autonomy and beneficence, ethical decision-making is most challenging [9].
Medical ethics

Medicine dates back thousands of years and has undergone gradual change over that time. The earliest mentions of it come from ancient Egyptian and Oriental civilizations, but beyond that, we have very little information about its early past. Hippocrates was the first person to distinguish medicine, as a science, from philosophical notions and magic, both of which were often applied in patient care during his historical period. Despite this, a good number of his guiding principles are still being practiced today, 2500 years later. Hippocratic medicine is like a social contract in that the code of ethics creates a set of standards to be followed by physicians. As a synthesis of this contract, the Hippocratic Oath was created in ancient Greece, and Herodotus and perhaps Homer contributed to its composition [14]. A fragment of the Hippocratic Oath on a 3rd-century papyrus is shown in Figure 2. Even though it has been updated in various ways, the present version of the Oath still upholds the same ethical principles. The original Hippocratic Oath translated into English [14] is given below.

I swear by Apollo the physician, and Aesculapius the surgeon, likewise Hygeia and Panacea, and call all the gods and goddesses to witness, that I will observe and keep this underwritten oath, to the utmost of my power and judgment. I will reverence my master who taught me the art. Equally with my parents, will I allow him things necessary for his support, and will consider his sons as brothers. I will teach them my art without reward or agreement; and I will impart all my acquirement, instructions, and whatever I know, to my master's children, as to my own; and likewise, to all my pupils, who shall bind and tie themselves by a professional oath, but to none else.

With regard to healing the sick, I will devise and order for them the best diet, according to my judgment and means; and I will take care that they suffer no hurt or damage.

Nor shall any man's entreaty prevail upon me to administer poison to anyone; neither will I counsel any man to do so. Moreover, I will give no sort of medicine to any pregnant woman, with a view to destroy the child.

Further, I will comport myself and use my knowledge in a godly manner. I will not cut for the stone, but will commit that affair entirely to the surgeons.

Whatsoever house I may enter, my visit shall be for the convenience and advantage of the patient; and I will willingly refrain from doing any injury or wrong from falsehood, and (in an especial manner) from acts of an amorous nature, whatever may be the rank of those who it may be my duty to cure, whether mistress or servant, bond or free.

Whatever, in the course of my practice, I may see or hear (even when not invited), whatever I may happen to obtain knowledge of, if it be not proper to repeat it, I will keep sacred and secret within my own breast.
If I faithfully observe this oath, may I thrive and prosper in my fortune and profession, and live in the estimation of posterity; or on breach thereof, may the reverse be my fate! [14]

Ethics has always been an essential part of the medical profession, regardless of historical period or country. The principles of medical ethics guide the interactions that physicians have with their patients, their colleagues, and society in general. Because ethics are founded on philosophies, religions, and political ideologies, there are significant differences in the medical ethics practiced in different countries [6]. Euthanasia and abortion are two of the most controversial topics in ethics, medicine, and the law that have dominated the 21st century, including among religions. These topics have sharply divided the scientific and nonscientific public into advocates and opponents of each issue. Here, a brief comparison of their legal status and medical ethics in Islamic countries, European countries, and the United States is provided.

Euthanasia

Euthanasia is the practice of intentionally ending life to eliminate pain and suffering. It is widely seen as a humanitarian approach to a terminal prognosis and patient pain. Emotional arguments, usually about severe circumstances, challenge the moral prohibition against physicians taking human life prematurely. This ethical concern has not only involved physicians but has also captivated legal and sociological experts worldwide throughout history. Legislators generally align with one of three approaches: outright prohibition of euthanasia, equating it with ordinary or privileged murder, or permitting it under certain prescribed conditions [9].

In Islamic countries, euthanasia is considered to be equivalent to murder. Thus, it is outlawed in all nations that are governed by Islamic religious beliefs, and Iran is no exception. In other Islamic countries, such as Turkey and a portion of Bosnia and Herzegovina, euthanasia is seen in the same light as other forms of murder and is subject to the same severe penalties [15]. However, the Benelux nations (Netherlands, Belgium, and Luxembourg) are Western European nations that do not consider the taking of life to be a crime if it is done in accordance with established legal guidelines and medical protocol. In the Netherlands, euthanasia can be requested not just by competent adults, but also by youth above the age of 12 [16]. While euthanasia is illegal in the United States, 10 states and Washington, DC, have legalized physician-assisted suicide [17]. In this way, we see how a similar real-life scenario might be governed quite differently by various legal systems.

Abortion

Abortion is one of the most commonly debated medical ethics topics in the world. Induced abortion is a practice that may be found in all nations, but the choice to terminate a pregnancy must take into account a wide range of factors, including those pertaining to medicine, ethics, morality, religion, society, the economy, and the legal system.
There are differing perspectives on abortion in Islamic ethics. Whether the fetus is believed to be alive is the first source of disagreement. It is believed by some sources that ensoulment occurs about 120 days after conception. Ensoulment is considered to be the first tangible evidence of life. This suggests that it is illegal to obtain an abortion beyond the first 120 days of a pregnancy. However, it is not plausible to assert that Islam permits abortions prior to the 120 days of pregnancy. Turkey is the only Muslim nation with a secular democracy and legal abortion [18]. In addition, each state in the United States has drastically different policies regarding the legality of abortion, as well as the many limits placed on the procedure. However, today almost all European countries allow abortion on request [19]. Thus, the investigation has shown that the legislation addressing abortion is more complex than might be expected.

This review's strength lies in its meticulous approach to exploring ethical principles and challenges in HCNs through a study of literature from reputable databases. However, a potential limitation arises from its exclusion of countries like China and India, along with other oriental nations, which may limit the global applicability of the findings. Furthermore, the absence of a discussion on racial discrimination within HCNs could limit the depth of the analysis.

Conclusions

In response to significant obstacles, the healthcare system is undergoing a fast transformation. At the moment, many people are placing their hopes on comprehensive HCNs and integrated medical services. The empirical implications of such networks are being discussed, but it is important that a greater focus be given to the ethical considerations involved. We need a framework that will allow us to evaluate how and when HCNs are justifiable, so that we can address the new and serious ethical challenges that they generate. Furthermore, how to place medical ethics more firmly into medical practice continues to be a central concern of physician training and practice in healthcare systems. This review also shows how a life situation may be regulated differently in different legal jurisdictions, and this is another challenge faced by HCNs.

FIGURE 2: A fragment of the Hippocratic Oath on the 3rd-century Papyrus Oxyrhynchus 2547. Image source: This file comes from Wellcome Images, a website operated by Wellcome Trust, a global charitable foundation based in the United Kingdom. This file is licensed under the Creative Commons Attribution 4.0 International license.
Phytosynthesis of BiVO4 nanorods using Hyphaene thebaica for diverse biomedical applications

Biosynthesis of bismuth vanadate (BiVO4) nanorods was performed using dried fruit extracts of Hyphaene thebaica as a cost effective reducing and stabilizing agent. XRD, DRS, FTIR, zeta potential, Raman, HR-SEM, HR-TEM, EDS and SAED were used to study the main physical properties, while the biological properties were established by performing diverse assays. The zeta potential is reported as −5.21 mV. FTIR indicated Bi-O and V-O vibrations at 640 cm−1 and at 700 cm−1/1120 cm−1. Characteristic Raman modes were observed at 166 cm−1, 325 cm−1 and 787 cm−1. High resolution scanning and transmission electron micrographs revealed a rod-like morphology of the BiVO4. Bacillus subtilis, Klebsiella pneumoniae and Fusarium solani indicated the highest susceptibility to the different doses of BiVO4 nanorods. Significant protein kinase inhibition is reported for BiVO4 nanorods, which suggests their potential anticancer properties. The nanorods revealed good DPPH free radical scavenging potential (48%) at 400 µg/mL, while a total antioxidant capacity of 59.8 µg AAE/mg was revealed at 400 µg/mL. No antiviral activity is reported against Sabin-like poliovirus. Overall, excellent biological properties are reported. We have shown that green synthesis can replace well established processes for synthesizing BiVO4 nanorods.

Introduction

Phytosynthesis of nanoscaled materials is an innovative approach often considered as a potential replacement for various chemical or physical methods. The inherent nature of chemical processes often leads to toxic wastes, while physical means are often accompanied by elevated energy requirements (Ovais et al. 2018b; Khalil et al. 2019a, b; Shah et al. 2018; Hassan et al. 2018). Relative to biologically synthesized nanoparticles, chemically synthesized nanoparticles indicate low biocompatibility and possess latent biological risks. In order to keep the energy balance and mitigate environmental risks, plants are used as a versatile bio-reductant for the synthesis of various metal nanoparticles or their nanocomposites. Medicinal plants possess a diverse reservoir of phytochemicals which can reduce and stabilize nanoparticles (Ovais et al. 2018c; Mohamed et al. 2019). Metal and metal based nanomaterials have diverse applications in different fields, and therefore a number of scientists have adopted novel methods for their synthesis and application (Thema et al. 2016; Devika et al. 2012; Mwakikunga et al. 2010; Khamlich et al. 2011). Metal vanadates have frequently been investigated for potential applications in implantable cardiac defibrillators, batteries, catalysis and photocatalysis (Sivakumar et al. 2015). Among these, bismuth vanadate has emerged as a promising candidate due to its unique physicochemical, optical and ferroelastic properties (Sarkar and Chattopadhyay 2012). Various applications of BiVO4 have been well studied in water splitting, sensors, pollutant degradation, etc. (Ma et al. 2019; Vo et al. 2019; Prado et al. 2019; Jaihindh et al. 2019; Hassan et al. 2019; Chomkitichai et al. 2019). Recently, there has been growing interest in the biological applications of BiVO4. An AgI-BiVO4 composite material indicated excellent potential for inactivation of Escherichia coli in water disinfection (Guan et al. 2018). Similarly, octahedral-shaped BiVO4 synthesized via a hydrothermal approach revealed inactivation of E. coli (80% to 100%) (Sharma et al. 2016).
Likewise, 100% inactivation of E. coli after 30 min of exposure to Ag-loaded BiVO4 has also been reported (Regmi et al. 2018). BiVO4 is among the few materials that remain stable in mild, pH-neutral conditions (Lichterman et al. 2013). Due to these exciting properties and applications, there is considerable interest in a commercially scalable process for synthesizing BiVO4. Different methods, such as ultrasonic-assisted, hydrothermal, pyrolysis, flame spray, chemical bath deposition, sonochemical, template-free solution and co-precipitation methods, have been explored to synthesize BiVO4 nanoparticles (Hu et al. 2018; Tao et al. 2019). Recently, plant extracts of Callistemon viminalis were used as low cost reducing and stabilizing agents for the biosynthesis of BiVO4 (Mohamed et al. 2018). Complementing the limited literature on green avenues and biological properties of BiVO4, a green method was adopted by using dried fruit aqueous extracts of Hyphaene thebaica as green scaffolds for the synthesis of rod-shaped BiVO4, which was subsequently studied for various biological properties. H. thebaica is a member of Arecaceae, locally referred to as Doum (Arabic) and gingerbread tree (English). The medicinal applications of H. thebaica are well reported in ethnomedicinal and folkloric scriptures (Khalil et al. 2019a, b). Various preparations of H. thebaica are reported for bleeding, haematuria and dyslipidemia, and as antihypertensive, diuretic and diaphoretic agents for hypertension and lowering blood pressure (Abdulazeez et al. 2019). In view of these medicinal applications and the ethnopharmacological relevance, the fruit part of H. thebaica was selected for biosynthesis.

Processing of plants

The fruits of H. thebaica were obtained from Aswan, Egypt, gently washed in running distilled water to remove dust, impurities or any form of particulate matter, shade dried, powdered and used for extraction by heating 10 g of powdered fruit material in 400 mL of distilled water at 100 °C for 2 h on a magnetic stirrer hotplate. Residual wastes were removed by filtering the extracts three times with Whatman filter paper, and the remaining transparent extracts were used further.

Biosynthesis of BiVO4

Bismuth nitrate (2.448 g) was added to 50 mL of aqueous extract and heated at 100 °C for 1 h to ensure complete dissolution of the precursor salt. In a separate flask, VOSO4 (1.126 g) was introduced into 50 mL of extract and heated at 100 °C for 1 h. A change in color was observed. Both solutions were then mixed to combine the bismuth and vanadium ions in proportion to form bismuth vanadate. The resultant precipitates were washed three times by centrifugation and dried at 100 °C. The dried precipitate was annealed at 500 °C for 2 h in a tube furnace, which yielded a yellow colored powder assumed to be BiVO4. Annealing was performed to obtain a high degree of crystallinity and purity.

Physical properties

Diverse techniques were applied to elucidate the main physical properties of the green synthesized BiVO4. Powder X-ray diffraction was carried out with a diffractometer equipped with a Cu Kα irradiation line of 1.5406 Å operating in Bragg-Brentano geometry. The Debye-Scherrer formula was used to calculate the nano size, while the data were compared with the standard diffraction database. Vibrational characteristics were studied using Raman spectroscopy and FTIR. Diffuse reflectance spectra were recorded. Morphology was studied using HR-SEM and HR-TEM. Elemental composition was analyzed by Energy Dispersive Spectroscopy, while Selected Area Electron Diffraction and the zeta potential were also investigated.
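As a worked illustration of the Debye-Scherrer size estimate mentioned above, the sketch below computes the crystallite size D = Kλ/(β·cosθ); the peak position and FWHM used here are illustrative assumptions chosen to land near the ~7 nm scale reported later, not values taken from the paper.

```python
import numpy as np

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta * cos(theta)), where beta is the
    peak FWHM converted to radians and theta is the Bragg angle (half of
    2-theta). The wavelength default is the Cu K-alpha line used above."""
    theta = np.deg2rad(two_theta_deg / 2.0)
    beta = np.deg2rad(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative peak: 2-theta = 28.9 deg with a 1.2 deg FWHM gives ~6.8 nm,
# i.e., the order of magnitude of the reported average size.
print(f"{scherrer_size(28.9, 1.2):.1f} nm")
```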
Once the physiochemical nature of the nanoparticles was established, they were processed for the analysis of their biomedical applications.

Antimicrobial properties

A simple well diffusion assay, as described earlier (Khalil et al. 2014), was used at different concentrations to investigate the antibacterial and antifungal potential of the BiVO4 nanorods in the concentration range of 4 mg/mL to 250 µg/mL (FCBP 434). Briefly, the microbial cultures were standardized to an optical density of 0.5, corresponding to the McFarland standards. 100 µL of inoculum was dispensed on Tryptic Soy Agar (bacterial media) and Sabouraud Dextrose Agar (fungal media) plates and uniformly spread with sterile cotton swabs. Using a sterile borer, 5 mm wells were made, and 30 µL of sample was introduced. Erythromycin and Amp B were used as positive controls for bacteria and fungi, respectively, while DMSO was added as a negative control. The bacterial cultures were incubated at 37 °C for 24 h, while the fungal plates were incubated at 37 °C for 72 h. Zones of inhibition were measured, and the MIC was taken as the lowest test concentration causing microbial inhibition.

Protein kinase inhibition

Streptomyces 85E cultured on ISP4 medium was used to assess PK inhibition, as described previously (Fatima et al. 2015), from 4 mg/mL to 250 µg/mL. The standardized culture (100 µL) was dispensed on the media plates and spread uniformly. A 5 mm borer was used to make wells, and the test samples were introduced, followed by incubation for 72 h at 30 °C. Bald and clear zones were measured, while DMSO and streptomycin were used as negative and positive controls, respectively.

Antioxidant assays

DPPH free radical scavenging and total antioxidant capacity were determined in the concentration range of 400-25 µg/mL through a spectrophotometer-based method as described previously (Hameed et al. 2019). The DPPH reagent solution was prepared by dissolving DPPH (9.6 mg) in methanol (100 mL). Test samples (20 µL) were added to DPPH reagent (180 µL) and then incubated for 20 min in the dark. Results were recorded at 517 nm, and the percent scavenging was calculated as (1 − Abs_sample/Abs_control) × 100. Total antioxidant capacity (Karunakaran et al. 2016) was investigated using the phosphomolybdenum-based method, and the results were expressed as ascorbic acid equivalents per milligram.

Hemolysis

Hemolytic activity was assayed as described previously (Malagoli 2007). Erythrocytes were isolated from freshly collected human blood in EDTA tubes by centrifugation at 14,000 RPM for 5 min. 200 µL of erythrocytes were added to 9.8 mL PBS to make an erythrocyte suspension. The test nanoparticles, at different concentrations, were introduced into Eppendorf tubes containing an equal amount of the prepared erythrocyte suspension and incubated for 1 h at 35 °C. The reaction mix was then centrifuged at 10,000 RPM for 10 min. The obtained supernatant was dispensed gently into 96-well plates, and the hemoglobin release was monitored at 540 nm, from which the percent hemolysis was determined.

Cell culture and antiviral experiments

Human Rhabdomyosarcoma cells (RD), Human Laryngeal Carcinoma cells (HEp-2) and L20B cells (mouse fibroblast cells) were grown in Eagle's Minimal Essential Medium (E'MEM) containing 10% FBS. Propagation of Sabin-like poliovirus (type 1) was done in HEp-2 cells supplemented with 2% FBS. Viral titers were determined using the Karber formula after titration of the virus on RD cells (Thuy et al. 2013).
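To make the two absorbance-based readouts above concrete, the sketch below computes percent DPPH scavenging from the A517 readings and percent hemolysis from the A540 readings. The use of a fully lysed positive control for hemolysis is a standard-practice assumption of this sketch rather than a detail stated in the paper, and the absorbance values are illustrative.

```python
def dpph_scavenging(abs_sample: float, abs_control: float) -> float:
    """Percent DPPH radical scavenged: the fractional drop in A517 of the
    sample relative to the reagent-only control, times 100."""
    return (1.0 - abs_sample / abs_control) * 100.0

def percent_hemolysis(abs_sample: float, abs_neg: float, abs_pos: float) -> float:
    """Percent hemolysis from hemoglobin release at A540, scaled between a
    PBS-only negative control and a fully lysed positive control (the
    positive control is an assumption of this sketch)."""
    return (abs_sample - abs_neg) / (abs_pos - abs_neg) * 100.0

# Illustrative absorbances chosen to reproduce the endpoints reported later
# in the paper (48% scavenging; 75% hemolysis at the highest dose):
print(f"{dpph_scavenging(0.52, 1.00):.0f}% scavenging")
print(f"{percent_hemolysis(0.80, 0.05, 1.05):.0f}% hemolysis")
```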
Assessment of cytotoxicity

The MTT assay was used for cytotoxicity assessment with slight modifications (Lin et al. 2005). The MTT assay is based on the mitochondrial dehydrogenase activity of viable cells, which gives a blue formazan product quantified spectrophotometrically. The MTT assay was performed in 96-well plates seeded with 100 µL of RD cells, HEp-2 cells and L20B cells at a concentration of 3.5 × 10^5 cells/mL, cultured in E'MEM (200 μL) containing FBS (10%) and incubated at 36 °C for 48 h in a CO2 incubator to maintain stable normal cell monolayers. Afterwards, the cells were treated with different doses of BiVO4 NPs (1000-15 μg/mL) and incubated for an additional 48 h at 36 °C. Cells were examined daily under an inverted light microscope to determine the minimum concentration of BiVO4 NPs resulting in morphological changes. 100 μL of MTT solution (5 mg/mL) was introduced into the wells after removing the media and incubated for 4 h at 37 °C. The MTT solution was then discarded, 50 μL of dimethyl sulfoxide (DMSO) was added to dissolve the insoluble formazan crystals, and the plates were incubated at 37 °C for 30 min. Optical density (OD) was measured at 540 nm using a spectrophotometer reader (Victor X3, PerkinElmer). Data were obtained from triplicate wells. Cell viability was expressed with respect to the absorbance of the control wells (untreated cells), which was considered 100%. The percentage of cytotoxicity was calculated as (A − B)/A × 100, where A and B are the OD540 of untreated and treated cells, respectively. The 50% cytotoxic concentration (CC50) was defined as the compound's concentration (μg/mL) required to reduce cell viability by 50%.

Assessment of antiviral activity

Confluent RD, Hep2C and L20B cell cultures were treated with mixtures of BiVO4 NPs and virus dilutions. First, 100 TCID50 of poliovirus type 1 was diluted tenfold to two concentrations of 10 TCID50 and 1 TCID50 in 2% E'MEM, introduced to non-cytotoxic concentrations of BiVO4 NPs (15 µg/mL) at a ratio of 1:1 (v/v), and incubated for 1 h at 36 °C. After that, the mixtures of virus dilutions (100 TCID50, 10 TCID50 and 1 TCID50) with BiVO4 NPs (1 mg/mL to 15 µg/mL) were incubated in 96-well plates seeded with healthy monolayers of Hep2C cells (3.5 × 10^5 cells/mL) in an incubator with 5% CO2. The 10% growth E'MEM medium was decanted and replaced with 200 µL of medium. Three controls were used: (i) 50 μL of BiVO4 NPs at 15 µg/mL concentration (without poliovirus) added to wells containing RD cells as a BiVO4 NPs control (Magudieshwaran et al. 2019); (ii) 50 µL of poliovirus at concentrations of 1 TCID50, 10 TCID50 and 100 TCID50 added to wells as virus controls; (iii) 200 μL of fresh maintenance medium added as a negative control. The cultures were incubated at 36 °C post-infection, and the cytopathic effect (CPE) was observed daily by inverted light microscopy. Cellular viability was determined through a crystal violet staining method. Optical density (OD) was measured at 490 nm using a spectrophotometer.
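Before turning to the characterization results, a minimal sketch of the MTT readout arithmetic defined above may be useful: percent cytotoxicity from the optical densities, plus a simple linear interpolation for CC50. The interpolation scheme and the dose-response numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

def percent_cytotoxicity(od_untreated: float, od_treated: float) -> float:
    """(A - B)/A * 100, with A and B the OD540 of untreated and treated cells."""
    return (od_untreated - od_treated) / od_untreated * 100.0

def cc50(doses_ug_ml: np.ndarray, viability_pct: np.ndarray) -> float:
    """Dose at which viability crosses 50%, by linear interpolation along a
    monotonically decreasing viability curve (an assumed scheme)."""
    order = np.argsort(viability_pct)          # np.interp needs ascending x
    return float(np.interp(50.0, viability_pct[order], doses_ug_ml[order]))

doses = np.array([15, 62, 250, 1000], dtype=float)   # µg/mL (assumed)
viab = np.array([95, 80, 55, 20], dtype=float)       # % viability (assumed)
print(f"{percent_cytotoxicity(1.20, 0.66):.0f}% cytotoxicity")  # 45%
print(f"CC50 ≈ {cc50(doses, viab):.0f} µg/mL")
```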
Physical characterizations

Hyphaene thebaica dried fruit aqueous extracts were used as a bio-reductant for the synthesis of novel BiVO4 nanorods. Different techniques were used to characterize the room temperature physicochemical properties of the nanorods. The overall process and study scheme is summarized in Fig. 1. The extracts were treated separately with the precursor salts of bismuth and vanadium, giving light brown and blue colors.

Powder X-ray diffraction was carried out to reveal the crystallographic properties and confirm the presence of BiVO4 nanorods. Figure 2a shows the diffraction pattern; the main peaks were indexed, with the highest-angle reflections assigned to the (321) and (123) planes, respectively. These crystallographic peaks were in correspondence with the JCPDS pattern 00-014-0688 for the Clinobisvanite phase of monoclinic bismuth vanadium oxide (BiVO4). The sharpness of the peaks indicates the highly crystalline nature of the BiVO4. No other peaks were detected, which suggests the single phase purity of the BiVO4 nanorods. The crystal structure belonged to space group I2/a, with lattice parameters deduced as a = 5.1 Å, b = 11.7 Å, and c = 5.09 Å, correlating with the yellow colored BiVO4. The Scherrer approximation revealed an average size of ~7 nm, as indicated in Table 1A.

After establishing the single phase purity of the BiVO4 nanorods, their elemental analysis was carried out using Energy Dispersive Spectroscopy, as indicated in Fig. 2b. Spectral analysis confirmed the presence of Bi, V and O. The peak of C relates to the grid support. Some traces of Cu and K were also found, which most probably emanate from the organic components of the fruit material.

Figure 2c indicates the FTIR spectrum of the biosynthesized BiVO4 nanorods from 200 to 4000 cm−1. Main absorption peaks were observed centered at ~640 cm−1, ~700 cm−1, ~1120 cm−1, ~1625 cm−1 and 3400 cm−1. The peak centered at ~640 cm−1 can be ascribed to Bi-O bending, while those at ~700 cm−1 and ~1120 cm−1 correspond to V-O symmetric and asymmetric vibrations (Khan et al. 2017). The IR peaks centered at ~1625 cm−1 and 3400 cm−1 can be ascribed to the stretching vibrations of the O-H group.

Raman spectroscopy is considered a powerful technique to probe the structure of metal oxides, and it was carried out to further elaborate the vibrational properties of the BiVO4 nanorods in the spectral range of 0 cm−1 to 1500 cm−1. Three noticeable Raman peaks were observed, centered at 166 cm−1, 325 cm−1 and 787 cm−1. The intense stretching mode of VO4 is observed at 787 cm−1. The Raman peak centered at 325 cm−1 represents the asymmetric bending mode of the VO4 tetrahedron (Brack et al. 2015). The peak centered at 166 cm−1 is attributed to an external mode vibration (Xu et al. 2018; Nikam and Joshi 2016). The Raman spectrum of the BiVO4 nanorods is shown in Fig. 2d.

The UV-Vis diffuse reflectance spectrum was recorded from 0 to 3000 nm. The BiVO4 nanorods revealed good visible light absorption with an absorption edge at 487 nm; the steep shape is ascribed to band gap transitions. The energy of the band gap is estimated to be ~2.54 eV (consistent with Eg ≈ 1240 eV·nm / 487 nm). The DRS spectrum is shown in Fig. 3. Figure 4A-F shows various high resolution microscopic images of the synthesized nanoparticles, establishing their morphology; one can conclude the formation of well aligned, rod-shaped BiVO4. The Selected Area Electron Diffraction pattern suggests the crystalline nature of the nanorods, as indicated in Fig. 4F. The zeta potential of the BiVO4 nanorods was recorded as −5.21 mV. Results are indicated in Table 1B.

Antimicrobial properties

The antimicrobial properties of the BiVO4 nanorods were explored against various bacterial and fungal strains. Results of the antibacterial and antifungal assays are indicated in Fig. 5a, b. Among the tested bacterial strains, B. subtilis revealed the highest zone of inhibition (20 mm to 9.5 mm) in the concentration range of 4 mg/mL to 250 µg/mL.
The least susceptible strain was E. coli, which revealed a maximum zone of inhibition of 11.5 mm at 4 mg/mL. The order of the antibacterial activity of the BiVO4 nanorods was B. subtilis > K. pneumoniae > S. epidermidis > P. aeruginosa > E. coli. Interestingly, for P. aeruginosa and S. epidermidis, the observed zones of inhibition were much larger than those of the positive control erythromycin at 1 mg/mL. Similarly, against K. pneumoniae, the BiVO4 nanorods were found to be as effective as the positive control. Among the five tested fungal strains, F. solani was the most susceptible, revealing zones ranging from 13 to 5.7 mm at the tested concentrations of BiVO4 nanorods. The order of the antifungal potential was F. solani > A. niger > Mucor sp. > A. fumigatus > A. flavus. A. niger did not reveal any zones at 500 µg/mL or below, while the nanorods were ineffective against A. fumigatus at 250 µg/mL. Against F. solani and A. niger, the zones were similar to those obtained with the positive control (Amp B). Results of the antifungal activity are summarized in Fig. 5b. Figure 6 shows various selected images of the antibacterial and antifungal activities. Moreover, these activities revealed a dose-dependent response.

Protein kinase inhibition

A simple assay based on the Streptomyces 85E strain was used to screen PK inhibitors. Figure 7a, b indicates the protein kinase inhibition potential of the H. thebaica-mediated BiVO4 nanorods. Excellent PK inhibition was revealed; the zones of inhibition at the tested concentrations ranged from 13 to 8 mm. However, these zones were much smaller than those obtained for the positive control.

Antioxidant assays

The antioxidant potential of the BiVO4 nanorods was determined using DPPH free radical scavenging and total antioxidant capacity. Moderate free radical scavenging potential is reported. At the highest tested concentration of 400 µg/mL, the percent scavenging was 48%, which gradually declined as the concentration was lowered below 400 µg/mL. At the lowest tested concentration of BiVO4 nanorods (25 µg/mL), 29% scavenging was observed. These results were complemented by the total antioxidant capacity, determined as µg AAE/mg. The highest value of ascorbic acid equivalents (59.8 µg AAE/mg) was recorded at 400 µg/mL, while at the lowest concentration of 25 µg/mL, 26.9 µg AAE/mg was recorded. Overall, the antioxidant activity can be concluded to be moderate and dose dependent. Results of the antioxidant potential are indicated in Fig. 7c.

Hemolysis

An erythrocyte lysis assay was performed to evaluate the toxicity of BiVO4 on freshly isolated RBCs at test concentrations ranging from 600 to 12.5 µg/mL. The BiVO4 nanorods were observed to cause an increased degree of hemolysis (75%) at the highest concentration of 600 µg/mL, while the percent hemolytic potential decreased with decreasing concentration and was lowest at the lowest tested concentration.

Antiviral activity of BiVO4

In order to investigate the antiviral activity of BiVO4, three concentrations (1 TCID50, 10 TCID50 and 100 TCID50) of Sabin-like poliovirus (type 1) were incubated with BiVO4 NPs (15 µg/mL). Our results indicated that the cells remained viable at 24 h post-infection. On the 5th day of incubation, it was observed that most Hep2C cells were destroyed at viral concentrations of 100 TCID50, 10 TCID50 and 1 TCID50 across all the tested concentrations of BiVO4 NPs. It can be inferred that the BiVO4 nanorods were unable to inhibit the propagation of poliovirus in the Hep2C cells.
Complete destruction of the Hep2C cells was due to the intracellular propagation of the poliovirus in the Hep2C cells cultured with 15 µg/mL of the BiVO4 nanorods.

Discussion

The interface of green nanotechnologies and medicinal plants has delivered excellent results over the previous decades. A number of biogenic metal-based nanoparticles have revealed excellent results (Sathiyavimal et al. 2018). Green synthesized nanoparticles often exhibit a multifunctional nature and can therefore be applied in diverse applications (Nasar et al. 2019; Venugopal et al. 2017). The interesting properties and potential applications of BiVO4 have fueled growing research on synthesis procedures that are easy, scalable, green and cost-effective. Different chemical and physical processes have been adopted for the synthesis of BiVO4; however, the potential of biological resources in its synthesis is largely untapped. Recently, we established the successful synthesis of BiVO4 by using Callistemon viminalis floral extracts as a bioreductant (Mohamed et al. 2018). Herein, a further detailed study was conducted on the physical as well as biological properties of BiVO4 nanorods synthesized using the fruit extracts of H. thebaica.

Plant extracts are reported to have a rich chemistry with the tendency to catalyze redox reactions and subsequently stabilize nanoparticles. The phytochemicals that usually take part in the reduction are mostly considered to be phenols, flavonoids, citric acid, membrane proteins, reductases, dehydrogenases, etc., while the stabilizing moieties can be tannic acids, extracellular proteins, peptides and enzymes (Karatoprak et al. 2017; Akhtar et al. 2013; Elegbede et al. 2018). BiVO4 occurs in several mineral forms (Gawande and Thakare 2012; Sivakumar et al. 2015), including pucherite (orthorhombic), dreyerite (tetragonal) and Clinobisvanite (monoclinic). Among these mineral forms, Clinobisvanite is the most thermodynamically stable and possesses significant photocatalytic potential (Zhao et al. 2011). Depending on the conditions, ferroelastic monoclinic-tetragonal phase transitions are reported (Frost et al. 2006). The elemental analysis confirms the presence of Bi, V and O, which establishes the synthesis of BiVO4. The infrared spectrum of the synthesized nanorods affirms the potential role of the phenolic components in the plant extracts, which have catalyzed the reduction and stabilization of the BiVO4 nanorods. The role of phenolic compounds as reducing agents is well established (Ovais et al. 2018).

To date, most of the studies revealing the antimicrobial potential of BiVO4 have considered only waste water disinfection. Recently, an innovative photocatalyst fabricated by Ni doping of BiVO4 revealed excellent degradation of ibuprofen (80%) within 90 min, while a 92% reduction of E. coli after 5 h of exposure to light was recorded; in addition, Ni-BiVO4 indicated excellent antialgal potential (Regmi et al. 2017). A novel BiVO4/InVO4 nanocomposite material revealed excellent sterilization potential against various strains, i.e., E. coli (99.71%), S. aureus (99.55%), P. aeruginosa (99.54%) and A. carterae (96%) (Zhang et al. 2019). In a recent report, a graphene-based nanocomposite of BiVO4 (90 mg/L) was studied for antibacterial potential against B. subtilis and S. aureus using a disc diffusion assay, but no zone of inhibition was observed, suggesting a nontoxic nature of the BiVO4-GO nanocomposite (Zhao et al. 2019). Our work describes for the first time the antimicrobial potential of phytosynthesized BiVO4.
The physiochemical nature of the nanorods (surface coating, reducing and stabilizing agents, shape, size and surface morphology) plays an important role in determining the antimicrobial activities (Zhang et al. 2016). The mechanism that drives the antimicrobial potential of metal nanoparticles has mostly been attributed to the generation of reactive oxygen species. The present age of antibiotic resistance signifies the need to develop alternative antibiotics. Microorganisms tend to evolve rapidly in order to develop resistance to the available treatments, and new antibiotics are not produced at the same pace at which microorganisms acquire resistance. Novel approaches like nanoantibiotics are therefore considered vital to curb antibiotic resistance. The BiVO4 nanorods have indicated excellent antimicrobial activities and can therefore be considered candidate nanoantibiotics for the future, pending a detailed evaluation of their toxicity.

The inhibition of protein kinase enzymes is considered a popular target for anticancer therapies, and tremendous research has therefore been devoted to identifying potent inhibitors of PK enzymes. Protein kinases are responsible for phosphorylating serine, threonine and tyrosine amino acids and play an integral role in signaling the differentiation and division of cells. Dysregulated phosphorylation leads to the progression of cancer; by inhibiting the protein kinases that serve as a bridge for the signaling factors, cell division can be stopped, ultimately hindering cancer progression. PK enzymes are vital for the growth of hyphae in the Streptomyces 85E strain, which is therefore considered a model organism. The cell culture experiments suggested the viability of the cells at low concentrations of the BiVO4 nanorods.

With advances in metal nanoparticle research, medicinal plants have emerged as an exciting resource to be explored for green synthesis. We have reported the biosynthesis of BiVO4 nanorods using H. thebaica fruit extracts as low cost, green templating agents and studied them for possible biological applications. Excellent antibacterial and antifungal activities are reported; the BiVO4 nanorods were most effective against Bacillus subtilis and Fusarium solani. Good protein kinase inhibition and antioxidant potential are revealed. The BiVO4 induced hemolysis at high concentrations, while at low concentrations the cell culture experiments revealed compatibility and non-toxicity. No potential antiviral activity was identified for BiVO4.

Green synthesis using medicinal plant extracts provides an excellent platform for assembling nanomaterials for different applications. The process is not only economical, but converging evidence also suggests enhanced compatibility of biosynthesized nanoparticles, making them ideal for nanomedicinal applications. Most of the work in this area has been dedicated to silver and gold nanoparticles and their nanomedicinal applications, which necessitates extending this methodology to novel nanomaterials. BiVO4 has diverse applications in industry, and further research is encouraged to use different plant extracts to synthesize BiVO4 and explore its biomedical potential.
Development of Sliding Mode Controller for a Modified Boost Ćuk Converter Configuration

This paper introduces a sliding mode control (SMC)-based equivalent control method for a novel high output gain Ćuk converter. An additional inductor and capacitor improve the efficiency and output gain of the classical Ćuk converter. Classical proportional integral (PI) controllers are widely used in direct current to direct current (DC-DC) converters. However, it is a very challenging task to design a single PI controller operating under different loads and disturbances. An SMC-based equivalent control method which achieves robust operation over a wide operating range is therefore proposed. The switching frequency is kept constant within appropriate intervals at different loading and disturbance conditions by implementing a dynamic hysteresis control method. Numerical simulations conducted in MATLAB/Simulink confirm the accuracy of the analysis of the high output gain modified Ćuk converter. In addition, the proposed equivalent control method is validated under different perturbations to demonstrate robust operation over a wide operating range.

Introduction

Direct current to direct current (DC-DC) converters play a vital role in electrical systems due to the increasing penetration of renewable sources in electrical networks. In addition to high efficiency and reliability requirements, robust performance of the converter over a wide operating range is of great importance, since DC-DC converters are also used in diverse special-purpose applications, such as electrical vehicles, DC motor drives, and telecommunication systems.

Different DC-DC converter topologies can be encountered in the literature. Classical converter topologies suffer from a lack of voltage gain ratio. A higher output voltage gain ratio with improved efficiency increases the performance of the converter, which is especially crucial for solar applications [1]. Diverse DC-DC converter topologies are proposed in [2-9] to improve the voltage gain ratio and efficiency. Important voltage lift methods are also reviewed and compared in [10].

A DC-DC converter circuit topology must be upgraded for higher output voltage gain and improved efficiency, lowering the conduction losses, designing a smaller size converter, and minimizing voltage and current stress on the semiconductor switch. In addition to circuit modification for achieving the above goals, the controller structure is also of great importance to improve the performance, robustness, and reliability over a wide operation range. Unfortunately, these converters are still bottlenecked in terms of system reliability and performance [1]. In addition to circuit and controller design requirements, the availability and reliability of a complete system in harsh environments is also an important task to be considered. The studies given in [11-14] outline the harsh environment requirements of electronic circuits and implement different types of electronic circuit applications for automotive systems.

High performance control of a DC-DC converter is a challenge for both control engineering and power electronics practitioners due to the highly nonlinear nature of DC-DC converters. Furthermore, fast response in terms of rejection of load variations, input voltage disturbances, and parameter uncertainties is mandatory for robust operation.
A Ćuk converter is a kind of buck/boost converter topology; the inverted output is either lower or higher than the input voltage. Different modifications have been applied to the classical Ćuk circuit [15,16] to enhance its performance. Modeling and control of Ćuk converters have been investigated with different approaches. Linear methods [17,18] and proportional integral (PI) controllers [19] are well-known design procedures with ease of implementation. However, these classical methods do not guarantee stability and high performance under different perturbations due to the highly nonlinear behavior of Ćuk converters. Thus, different nonlinear control algorithms have also been implemented in Ćuk converters to overcome this drawback, such as passivity-based control [20], neural networks [21], direct control methods [22], fuzzy logic [23], and sliding mode control (SMC) [24].

SMC for variable structure systems [25] is a robust control method for nonlinear systems due to its insensitivity to parameter variations, fast dynamic response, and ease of implementation. SMC was first applied to DC-DC converters in [26,27], and many diverse implementation examples are available in [27]. Design criteria for SMC application to DC-DC converters are outlined in [28]. SMC-based equivalent controllers are applied to buck/boost and Ćuk converter topologies in [29,30]. However, SMC is not widely implemented in DC-DC converters due to the unavailability of integrated circuit forms for power electronic applications. Moreover, its variable switching frequency (SF) behavior, which depends on the converter parameters and operating regions, complicates electromagnetic interference filter design and practical implementation. A scheme given in [31] outlines SF fixing and reduction methods in SMC applications. In addition, it is known that DC-DC converters are unwanted noise generators, and this problem can be overcome with fixed frequency operation [28].

Different control techniques have been proposed to achieve constant SF operation of DC-DC converters. An equivalent controller is designed and the output of the controller is compared with a saw-tooth signal to fix the SF in [32]. Frequency locking techniques are applied in [33] to achieve constant SF operation of SMC for a buck converter. An analog circuit design perspective for fixed frequency operation of SMC is given in [34]. Dynamic hysteresis control [35,36] is another contribution which is commonly used for fixed SF operation.

This study aims to improve the output voltage gain of a Ćuk converter circuit by the inclusion of a single inductor and capacitor. The efficiency of the overall system is increased, and it is verified that the voltage transformation ratio (Vo/Vi) is increased to 1/(1 − δ), just as in classical boost converters, where δ is the duty ratio of the converter. The proposed model is mathematically analyzed, and numerical simulations conducted in MATLAB/Simulink validate the accuracy of the analysis.
Moreover, an SMC-based cascaded equivalent controller is implemented for robust operation of the proposed converter. The general structure of SMC for DC-DC converters consists of an external voltage controller to achieve the desired output voltage requirements, while an inner SMC performs the control of the input current [26]. In general, a PI controller is sufficient for the voltage requirements. Therefore, a cascaded PI+SMC structure achieves robust operation of the novel high output gain Ćuk converter over a wide operation range. Constant SF operation is achieved at different loading and disturbance conditions by using a simple dynamic hysteresis controller. The control algorithm is implemented in the MATLAB/Simulink environment in different scenarios: (1) a high output reference voltage step; (2) output resistance variation; (3) input voltage drop; (4) input inductor parameter variation. The proposed method effectively achieves the performance goals for all aforementioned perturbations.

High Output Gain Modified Ćuk Converter

The developed Ćuk converter is depicted in Figure 1. A classical Ćuk converter was modified with an additional inductor (L3) and capacitor (C2). Figure 2a,b provides the equivalent circuit representation of the modified Ćuk converter with the semiconductor switch S turned ON and OFF, respectively. When the switch S is turned ON and OFF, the inductance voltage equations (1) and (2), respectively, can be written for the circuit over one period at steady-state conditions.

According to Faraday's Law, the average voltage across an inductor is zero at steady-state. Hence, the voltage gain ratio of the converter can be obtained by starting from the commonly used volt-second balance

vL,ON · δT + vL,OFF · (1 − δ)T = 0 (3)

where the term δ means the duty ratio of the switch S. Equation (3) is written for L1, L2, and L3 to obtain the required voltage gain ratio: writing (3) for L1 gives (4), which simplifies to (5); writing (3) for L2 gives (6), which can be arranged as (7); and writing (3) for L3 gives (8), which simplifies to (9). If (5), (7), and (9) are combined, the duty ratio of the converter can be obtained: inserting (9) into (7) gives (10), and inserting (10) into (5) finalizes the duty ratio of the system (11).
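For concreteness, the following worked example (added here for illustration; the ON/OFF inductor voltages shown are those of a generic boost-type input loop, not necessarily the exact expressions (4)-(9) of the modified topology) shows how the volt-second balance of (3) produces the boost-like gain stated above.

```latex
% Illustrative volt-second balance, assuming v_L = V_i when S is ON
% and v_L = V_i - V_o when S is OFF (generic boost-type input loop):
\begin{aligned}
V_i\,\delta T + (V_i - V_o)(1-\delta)T &= 0\\
V_i &= V_o\,(1-\delta)\\
\frac{V_o}{V_i} &= \frac{1}{1-\delta}
\end{aligned}
```

This is the same transformation ratio 1/(1 − δ) stated above for the modified Ćuk converter.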
A sample design circuit can be constructed in MATLAB/Simulink using the circuit parameters given in Table 1. Different δ values are applied in the simulation, as shown in Figure 3c. The output voltage (Vo) and input current (ii) curves change accordingly, as shown in Figure 3a,b, respectively. When the output voltage and input currents are zoomed, it is observed in the simulations that the frequency of the ripples is equal to the SF (150 kHz).

The performance of the modified Ćuk converter was compared to classical Ćuk and buck/boost converter circuits. Figure 4a shows the δ comparison of the converters. A simulation platform is constructed in MATLAB/Simulink with the same parameters given in Table 1; the theoretical and simulation values of the modified Ćuk converter validate the results. An efficiency comparison of the simulated buck/boost, Ćuk, and modified Ćuk converters is depicted in Figure 4b. The modified Ćuk converter efficiency is higher than that of the classical Ćuk and buck/boost converters. It can be stated that the proposed modified Ćuk converter produces higher efficiency due to the inclusion of additional passive elements. This reduces several parasitic effects and switching/conduction losses and increases the voltage gain ratio, as emphasized in [37].
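As a quick numeric companion to the design example (an illustrative sketch, not the authors' Simulink model; only the 1/(1 − δ) gain relation and the 15 V input used later in the perturbation tests come from the paper, and the sign of the inverted output is reported as a magnitude), the following Python snippet sweeps the duty ratio and prints the ideal output voltage of the modified converter.

```python
# Ideal steady-state gain sweep for the modified Cuk converter.
# Uses the boost-like transformation ratio Vo/Vi = 1/(1 - delta)
# derived above; the Cuk output is inverted, so magnitudes are shown.

V_IN = 15.0  # input voltage [V], matching the simulation scenarios

def output_voltage(duty: float, v_in: float = V_IN) -> float:
    """Ideal (lossless) output voltage magnitude for duty ratio `duty`."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty ratio must lie in [0, 1)")
    return v_in / (1.0 - duty)

if __name__ == "__main__":
    for duty in (0.5, 0.7, 0.8, 0.9):
        print(f"delta = {duty:.2f} -> |Vo| = {output_voltage(duty):7.1f} V")
```

With Vi = 15 V and the paper's maximum allowable δ of 0.9, the magnitude tops out at 150 V, consistent with the −150 V reference that could not be reached in the PI comparison study reported below.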
Equivalent Control of Modified Ćuk Converter

A cascaded PI+SMC controller structure can be used for ease of implementation on the modified Ćuk converter, as depicted in Figure 5. A simple external voltage controller generates the input current reference, while the equivalent controller regulates the input current [29,30]. Although the modified Ćuk converter is a third-order nonlinear model, only the input current equation is required to construct an equivalent controller. This is the main advantage of SMC-based equivalent controllers, since the performance is independent of the remaining system dynamics and parameter variations. The input current of the modified Ćuk converter, in terms of Kirchhoff's voltage law, can be written in the following form:

L1 · (di_i/dt) = Vi − (1 − u) · Vc1 (12)

The terms Vi and Vc1 are the input and C1 voltages, and u is the switching signal of the semiconductor switch, as explained below.

The external PI controller aims to achieve the reference voltage target, and its output acts as the reference current (i*). The internal SMC-based equivalent current controller aims to track the current trajectory; the analytical design procedure is detailed below. The switching surface σ is defined on the current tracking error in (13) [29,30], with its time derivative given in (15). If the input current equation (12) is substituted into the derivative of the switching surface in (15), equation (16) is obtained. Since σ̇ is assumed to be zero at steady-state, the equivalent control signal u_eq can be generated as in (17). The continuous function u_eq is then converted into discontinuous form as follows:

σ̇ = u − u_eq (18)

The closed-loop control signal derived from the switching surface is given in (19), where the term K is a positive definite control gain. If (19) is inserted in (18), the switching surface dynamics (20)-(21) are obtained, where ûeq is the estimated equivalent control input; in steady-state, ûeq = u_eq.

Finally, the stability and existence conditions for sliding mode control must be clarified [18]. The condition σ̇σ < 0 must be satisfied, and it can be derived from (21) that σ̇σ is negative definite when ûeq = u_eq; thus, the stability of the sliding surface is satisfied. The controller structure can be constructed by estimating ûeq. The estimation of the equivalent control is formed as in (23), where the term l is the filter gain of the estimator. It is assumed that ûeq is constant, and the time derivative of (23) is written as (24)-(25); it can be stated that ûeq = u_eq in steady state (26). If (26) is written in the form ûeq = v − lσ, equation (27) is obtained [29,30]. Finally, the simple equivalent controller structure in Figure 6 can be obtained from the definitions written above.
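To illustrate the mechanics of the current loop numerically (a minimal sketch, not the authors' Simulink implementation: it assumes the input-loop dynamics of (12) with Vc1 held constant, a fixed current reference in place of the outer PI loop, a surface defined as the current tracking error, and the placeholder parameter values below), the following Python simulation runs a relay-with-hysteresis switching law and compares the resulting average duty ratio against the ideal equivalent control.

```python
# Minimal sliding-mode current-loop simulation for the converter input
# stage. Assumptions (not from the paper): constant Vc1 = 60 V, a fixed
# current reference, and the placeholder component values below. The
# input-loop dynamics follow L1*di/dt = Vi - (1 - u)*Vc1 as in (12).

L1 = 100e-6      # input inductance [H]
V_IN = 15.0      # input voltage [V]
V_C1 = 60.0      # assumed (constant) C1 voltage [V]
I_REF = 5.0      # fixed current reference [A] (outer PI loop omitted)
M = 0.05         # hysteresis band on the sliding surface [A]
DT = 1e-8        # integration step [s]
T_END = 2e-3     # simulated time [s]

def simulate() -> float:
    """Run the relay-with-hysteresis law and return the average duty ratio."""
    i, u = 0.0, 1                       # inductor current, switch state
    steps = int(T_END / DT)
    on_count = 0
    for _ in range(steps):
        sigma = i - I_REF               # assumed surface: current tracking error
        if sigma > M:                   # relay with hysteresis (Figure 7)
            u = 0
        elif sigma < -M:
            u = 1
        i += (V_IN - (1 - u) * V_C1) / L1 * DT   # input-loop dynamics (12)
        on_count += u
    return on_count / steps

if __name__ == "__main__":
    duty = simulate()
    u_eq = 1.0 - V_IN / V_C1            # equivalent control from (12) at di/dt = 0
    print(f"average duty from sliding mode: {duty:.3f}")
    print(f"ideal equivalent control u_eq : {u_eq:.3f}")
```

On the sliding surface the switch duty averages to u_eq (0.75 for these assumed voltages, up to a small startup bias), which is the sense in which the discontinuous relay realizes the continuous equivalent control and why only the input-current equation is needed for the design.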
As detailed in [25,29,30], Figure 6 shows that a dynamic system can be formed as a series of integrators, and it can be assumed that the output of this system can be estimated by an upper bound of the integral. Finally, a dynamic relay with a hysteresis function, as given in Figure 7, can be applied to the control signal to generate a sliding surface which oscillates with a magnitude of M.

The main disadvantage of the SMC-based equivalent controller is its variable switching frequency (SF), because the magnitude of the oscillations in the sliding surface is highly dependent on the circuit parameters and operating conditions due to the nonlinear behavior of the converter. One method that can constrain the sliding surface to constant SF operation is a dynamic hysteresis controller [35], which dynamically changes the magnitude of the sliding surface σ according to the desired switching frequency value. An additional PI controller which operates intermittently to bring the SF back to the desired value is a simple and practical solution. The output of this PI controller dynamically changes the hysteresis band of the dynamic relay (M); thus, the SF settles to the desired interval accordingly.

Another drawback of the method is the requirement of SF measurement. An SF measurement algorithm can be implemented by counting the rising edges of the gate signals at certain instants; if the number of rising edges is divided by a predefined time interval, the SF can be easily calculated. As a result, an intermittent PI controller structure can keep the SF constant within a specified interval, as shown in Figure 8 and as sketched in the code below. The output of the intermittent PI controller is the resultant M value of the dynamic relay with hysteresis function.
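The SF measurement and intermittent correction loop described above can be expressed compactly in code. The sketch below is illustrative only: the window length, PI gains, and the enable threshold's units are assumptions (the paper enables the PI when the absolute error exceeds 10, units unstated, here assumed kHz), while the 150 kHz target comes from the paper.

```python
# Rising-edge SF estimator plus intermittent PI correction of the
# hysteresis band M. Window length, gains, and the enable threshold
# are illustrative; the 150 kHz target comes from the paper.

F_TARGET = 150e3        # desired switching frequency [Hz]
ENABLE_BAND = 10e3      # PI acts only when |error| exceeds this (assumed kHz)
KP, KI = 1e-8, 1e-4     # assumed PI gains acting on M
WINDOW = 1e-3           # measurement window [s]

class SfRegulator:
    def __init__(self, m_init: float = 0.05):
        self.m = m_init          # current hysteresis band
        self.integral = 0.0
        self.edges = 0
        self.prev_gate = 0

    def on_sample(self, gate: int) -> None:
        """Call once per control sample with the current gate signal (0/1)."""
        if gate == 1 and self.prev_gate == 0:
            self.edges += 1      # count rising edges of the gate signal
        self.prev_gate = gate

    def on_window_end(self) -> float:
        """Every WINDOW seconds: estimate SF, intermittently update M."""
        f_meas = self.edges / WINDOW     # edges divided by the time interval
        self.edges = 0
        error = f_meas - F_TARGET
        if abs(error) > ENABLE_BAND:     # intermittent operation
            self.integral += error * WINDOW
            self.m = max(1e-4, self.m + KP * error + KI * self.integral)
        return self.m
```

Enlarging M lengthens each hysteresis cycle and therefore lowers the SF, which is why a positive frequency error must increase M; the max() guard keeps the band from collapsing to zero during transients.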
Simulation Results

Four different scenarios are implemented in a single simulation in the MATLAB/Simulink SimPowerSystems platform at different time instants. The variable-step ode23tb (stiff/TR-BDF2) solver is used in the simulations, with the circuit parameters given in Table 1. Stable gains for the external PI controller and the equivalent controller are given in Table 2.

The standard Routh-Hurwitz criterion for the determination of the PI gain values is omitted in this paper; all gain values are determined by trial and error. For further details of the Routh-Hurwitz criterion for external PI controllers, one can refer to [24]. The target SF value was selected as 150 kHz, and the intermittent PI hysteresis controller was only enabled when the absolute value of the error exceeded 10, as shown in Figure 8. It is not possible to realize a precise controller for SF control under all perturbations due to unexpected oscillations in the M value.

The applied step and disturbance instants follow the four scenarios listed in the Introduction. Figure 9 shows the output voltage and input current responses under the different perturbations; the required trajectories are successfully tracked for all disturbances. Figure 9a shows the output voltage trajectory with the peak values at the instants of the perturbations; the controller successfully passed all perturbations. Figure 9b,c shows the input and output currents of the converter. All ripple values are zoomed and match the design circuit results of Figure 3.

Figure 10 shows the performance of the intermittent hysteresis controller. Figure 10a shows the load resistance variation to validate the applied load change, and Figure 10b shows the SF variations under all perturbations. The target SF is achieved under all the different perturbations by changing the M value, as shown in Figure 10c. The SF exceeds the target value during transients, but settles to the target interval at all steady-state instants of the perturbations.

Figure 11 shows the output voltage and input current ripples at the instants of the semiconductor ON and OFF states in the simulation. Figure 11a shows the instants of the gate signal, and Figure 11b shows the control signal u generated by the SMC-based equivalent controller. Figure 11c,d show the ripple contents of the output voltage and input current; the ripple values verify those of the design circuit simulation in Figure 3.

A comparison between classical linear control methods and the proposed equivalent controller was also carried out to show the effectiveness of the proposed method. A cascaded controller structure which consists of voltage and current PI controllers is depicted in Figure 12. In particular, some studies use a single voltage PI controller to control a DC-DC converter; however, this type of controller has a very limited operational range, and a comparison with a single PI controller would be inconsistent given the cascaded structure of the proposed equivalent controller. The additional current controller introduces additional left-half-plane zeros and increases the performance of the controller structure. It is a difficult task to design a linear controller for the third-order modified Ćuk converter, and all states must be observed or measured. The perturbations applied to the controller are given below; the schedule is also collected in the code sketch that follows the list.

0.03-0.06 s: Output voltage reference is changed from −145 V to −45 V.
0.08-0.11 s: Load resistance is decreased from 100 Ω to 85 Ω (15 % load increase).
0.13-0.16 s: Input voltage is decreased from 15 V to 12 V (20 % input voltage dip).
0.18-0.21 s: Input inductor (L1) is decreased from 100 µH to 85 µH (15 % L1 reduction).
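For readers reproducing the test sequence, the perturbation schedule can be encoded as data and applied to any simulation loop. The snippet below is a minimal sketch: only the event times and values come from the list above; the parameter names and the lookup helper are illustrative.

```python
# Perturbation schedule taken from the list above; each entry gives the
# active interval [s] and the parameter value applied during it.
BASE = {"v_ref": -145.0, "r_load": 100.0, "v_in": 15.0, "l1": 100e-6}

EVENTS = [
    (0.03, 0.06, "v_ref", -45.0),    # reference step
    (0.08, 0.11, "r_load", 85.0),    # 15 % load increase
    (0.13, 0.16, "v_in", 12.0),      # 20 % input voltage dip
    (0.18, 0.21, "l1", 85e-6),       # 15 % L1 reduction
]

def params_at(t: float) -> dict:
    """Return the plant/controller parameters active at time t."""
    p = dict(BASE)
    for t0, t1, name, value in EVENTS:
        if t0 <= t < t1:
            p[name] = value
    return p

if __name__ == "__main__":
    for t in (0.0, 0.04, 0.09, 0.14, 0.19, 0.25):
        print(f"t = {t:5.2f} s -> {params_at(t)}")
```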
Perturbations as large as those applied to the proposed SMC equivalent controller could not be applied under PI control due to stability problems. The maximum allowable δ is 0.9, and higher δ values could not be achieved; therefore, a −150 V output voltage reference could not be accomplished. The PI controller gain values are optimized by trial and error and are given in Table 3. The controllers are tuned at the maximum allowable proportional and integral coefficients to achieve the highest dynamic performance; higher proportional and integral gains could not be used due to larger oscillations in the voltage output.

Figure 13 shows the performance of the comparison PI controller. Figure 13a shows that the output voltage performance could not be achieved for all perturbations. The dynamic response is more sluggish compared to the proposed SMC-based equivalent controller, and the load resistance change at the 0.11th second causes steady-state error and oscillations. The input voltage perturbation cannot be responded to due to the unavailability of δ higher than 0.9; a steady-state error exists at the instant of the input voltage perturbation due to the sluggish dynamic performance and the unavailability of higher δ. Figure 13b shows the resulting input currents, and Figure 13c depicts the resultant δ.
All perturbations are summarized in Table 4, where the maximum peak overshoots are outlined; it is shown that the controller passed high load impact values and the other disturbances. The value of M is dynamically changed according to the SF requirements under the different conditions, and the input current and output voltage ripples change with the varying M value.

Conclusions

This paper proposed a modified high output gain Ćuk converter with an SMC-based equivalent controller. The efficiency and performance of a classical Ćuk converter were improved by the simple inclusion of a single inductor and capacitor. Moreover, a constant switching frequency cascaded equivalent controller structure was proposed. Simulation results show the effectiveness and robustness of the proposed method, and the constant switching frequency approach to the SMC-based controller provides the opportunity for simple application to real systems.

Acknowledgments: No source of funding for this project.

Author Contributions: All authors contributed equally to framing the full version of the research article in its current form.

Figure 2. Equivalent circuit representation of the modified Ćuk converter with the semiconductor switch S turned (a) ON and (b) OFF.
Table 1. Design parameters of the modified Ćuk converter.
Table 2. Controller parameters of the modified Ćuk converter.
Table 3. Controller parameters of the comparison PI controller.
Table 4. Performance indices of the modified Ćuk converter (columns: Vo peak overshoot, M, ii ripple (A), Vo ripple (V)).
2017-07-15T01:33:33.936Z
2017-09-29T00:00:00.000
{ "year": 2017, "sha1": "2c5c236ccc4f20fb6ba08de03ee3e318c4fcdf5e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/10/10/1513/pdf?version=1506696865", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "575c11eed186e46b97ca7994ce3bcad8d443967a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
254564996
pes2o/s2orc
v3-fos-license
Third dose of an mRNA COVID-19 vaccine for patients with multiple myeloma

We have reported that IgG antibody responses following two mRNA COVID-19 vaccinations are impaired among patients with multiple myeloma (MM). In the current study, sixty-seven patients with MM were tested for anti-spike IgG antibodies 0-60 days prior to their first vaccination, 14-28 days following the second dose, and both before and 14-28 days after their third dose of the mRNA-1273 or BNT162b2 vaccines. After the first two doses, most patients' (93 %) antibody levels declined to ineffective levels (<250 BAU/mL) prior to their third dose (D3). D3 elicited responses in 84 % of patients (61 % full response and 22 % partial response). The third vaccination increased antibody levels (average = 370.4 BAU/mL; range, 1.0-8977.3 BAU/mL) relative to just prior to D3 (average = 25.0 BAU/mL; range, 1.0-683.8 BAU/mL) and achieved higher levels than the peak levels after the first two doses (average = 144.8 BAU/mL; range, 1.0-4,284.1 BAU/mL). D3 response positively correlated with mRNA-1273, a > 10-fold change from baseline for the two-dose series, switching from BNT162b2 to mRNA-1273 for D3, and treatment with elotuzumab and an immunomodulatory agent. Lower antibody levels prior to D3, a poorer overall response to the first two doses, and ruxolitinib or anti-CD38 monoclonal antibody treatment negatively correlated with D3 response. Our results show encouraging activity of the third vaccine, even among patients who failed to respond to the first two vaccinations. The finding of specific factors that predict COVID-19 antibody levels will help advise patients and healthcare professionals on the likelihood of responses to further vaccinations.

Introduction

SARS-CoV-2 has infected over 500 million people and caused over 6 million deaths since its discovery in 2019 (Dong et al., 2020). Clinical trials have shown that mRNA vaccination for COVID-19 dramatically reduces the risk of severe disease and hospitalization among healthy individuals (Polack et al., 2020;Baden et al., 2021). However, immunocompromised individuals, such as those with multiple myeloma (MM), are less likely to derive protective humoral and cell-mediated immunity from these vaccines (Clem, 2011;Ehmsen et al., 2021) and are at risk for more significant complications from COVID-19 (Baek et al., 2021) (Table 1).

MM is a hematological malignancy characterized by the presence of clonal plasma cells in the bone marrow that produce monoclonal antibodies (Liu et al., 2020). It is associated with a functional reduction in immune responses, which results in a significantly increased risk of infection (Alemu et al., 2016;Tete et al., 2014) and less robust responses to vaccines (Ludwig et al., 2021). We and other groups have found that MM patients show lower anti-spike IgG antibody responses to COVID-19 mRNA vaccinations than healthy controls (Stampfer et al., 2021;Agha et al., 2019), leaving these patients at significant risk of experiencing symptomatic breakthrough infections (Stampfer et al., 2022). This finding emphasizes the importance of regular serological monitoring of anti-spike IgG antibody levels for these patients following their ongoing COVID-19 vaccinations, as these levels are likely to correlate with neutralizing antibody levels and T-cell responses (Sui et al., 2021;Earle et al., 2021).
In our first report (Stampfer et al., 2021), we stratified patients into clinical response categories based on studies showing mRNA vaccination achieving 94-95 % efficacy against mild COVID-19 cases in the context of the original strain (Polack et al., 2020;Baden et al., 2021). The spike antibody level of 250 BAU/mL was selected as the clinically relevant cutoff level, as it was exceeded by 94 % of the control samples in our previous study. Consistent with the clinical relevance of this cutoff, a longitudinal study of 246 dental professionals who had previously been infected showed that spike antibody levels of ≥ 147 IU/mL conferred complete protection against reinfection during a six-month follow-up (Shields et al., 2021). Specifically, patients were categorized into clinically relevant responders (>250 BAU/mL), partial responders (50-250 BAU/mL), and non-responders (<50 BAU/mL). Only 45 % of MM patients achieved clinically relevant responses, and 22 % fell into the partial responder category. We also identified specific characteristics that determined patients' likelihood of responding to the first two vaccines, which included both MM- and vaccine-related factors, such as mRNA vaccine type, age, and uninvolved immunoglobulin levels.

Although the first two doses of mRNA COVID-19 vaccines are highly effective in preventing infection for up to 6 months, both COVID-19 antibody levels and, as a result, the effectiveness of these vaccinations decrease dramatically after that time point (Goldberg et al., 2021;Levin et al., 2021;Rosenberg et al., 2021). As a result, booster vaccinations have been administered, and a recent large, placebo-controlled trial demonstrates the efficacy of a third dose of mRNA COVID-19 vaccine in preventing this viral infection among adult volunteers (Moreira et al., 2022). A recent paper shows that approximately half of patients with solid and hematologic malignancies lose neutralizing antibodies against the current variants of concern by the six-month mark post-vaccine, and that this drop in antibody level is faster than what is observed in healthy individuals (Obeid et al., 2022). As a result, these patients likely need to receive booster vaccine doses more frequently than the general population.

In the current study, we evaluated MM patients who received a third dose of a COVID-19 mRNA vaccine for anti-spike IgG antibodies. The purpose was to determine the response to a third vaccination and what factors might predict the level of response, including baseline levels prior to the third dose, response to the first two doses, mRNA vaccine type, and MM treatment regimen. Responses were categorized according to the three clinical response categories used in our previous study (Stampfer et al., 2021).

Participants and eligibility

Subjects included MM patients treated at a single center specializing in the treatment of these patients who had received three doses of a COVID-19 mRNA vaccination. Patients with a history of COVID-19 infection confirmed by positive nucleic acid testing were excluded from this study.

Timepoints

Anti-spike IgG antibody levels were determined 0-60 days before (D3-baseline) and 14-28 days after their third dose (D3W2). These values were compared to post-two-dose series levels drawn 14-28 days following the second dose (D2W2), and pre-vaccination baseline values drawn 0-60 days prior to the first vaccine dose (D0).
Quantitative anti-Spike IgG ELISA

The semiquantitative spike IgG ELISA assay [units in binding antibody units (BAU)/mL] was the test described previously, with 1000 BAU/mL matching the 20/136 NIBSC WHO convalescent plasma standard (Stampfer et al., 2021).

Comparative analysis

The reported measures of central tendency were the median and geometric mean. Differences in antibody levels between time points for the same patient were analyzed using the Wilcoxon signed-rank test, while differences between patients were analyzed using Mann-Whitney U-tests. The difference in the percentage of patients that responded better to D3 than to dose 2 (D2) was analyzed using a z-test of proportions. To determine which variables were predictive of D3 response, we used a multivariate binary logistic regression model with stepwise AIC selection. Vaccine response was defined as > 250 IU/mL. All statistical analysis was done using GraphPad Prism 9 (San Diego, CA) and R (version 4.1.2).
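The comparative analysis described above maps directly onto standard statistical library calls. The sketch below is illustrative only: the study itself used GraphPad Prism and R, whereas this uses Python; the column names and the randomly generated data frame are hypothetical; and the full stepwise AIC selection is only indicated, not reproduced.

```python
# Illustrative Python analogue of the paper's comparative analysis
# (the study itself used GraphPad Prism 9 and R). Column names and the
# example data frame are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon, mannwhitneyu
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "d2w2": rng.lognormal(4.0, 1.5, 67),          # post-dose-2 antibody [BAU/mL]
    "d3w2": rng.lognormal(5.0, 1.5, 67),          # post-dose-3 antibody [BAU/mL]
    "vaccine_mrna1273": rng.integers(0, 2, 67),   # 1 = mRNA-1273 series
    "age": rng.integers(40, 87, 67),
})
df["d3_response"] = (df["d3w2"] > 250).astype(int)  # >250 BAU/mL = full response

# Paired comparison of timepoints within the same patients
print(wilcoxon(df["d2w2"], df["d3w2"]))

# Between-group comparison (e.g., mRNA-1273 vs BNT162b2 recipients)
grp = df.groupby("vaccine_mrna1273")["d3w2"]
print(mannwhitneyu(grp.get_group(1), grp.get_group(0)))

# Binary logistic regression of D3 response on candidate predictors;
# the paper additionally applied stepwise selection by AIC.
X = sm.add_constant(df[["vaccine_mrna1273", "age"]])
fit = sm.Logit(df["d3_response"], X).fit(disp=False)
print(fit.summary2())  # the AIC used for stepwise selection is reported here
```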
Results

There were 67 MM patients who received three doses of mRNA-based COVID-19 vaccines (BNT162b2 or mRNA-1273) and were found to be eligible based on sample availability at all of the study time points. Thirty received three doses of mRNA-1273, 18 received three doses of BNT162b2, and 19 received two doses of BNT162b2 followed by one dose of mRNA-1273; the median number of days between the second and third vaccine was 185 (range, 99-247 days). The median age was 70 years (range, 40-86 years), 57 % of patients were male, and 81 % were Caucasian. Active MM patients were being treated with a variety of treatment regimens. For prior regimens based on drug class, 88 %, 45 %, 34 %, 22 %, 13 %, and 12 % were or had previously been on a steroid-containing regimen, immunomodulatory agent (IMiD), proteasome inhibitor (PI), anti-CD38 antibody, ruxolitinib (RUX), and elotuzumab (ELO), respectively. At the time of D3, the median number of prior lines of therapy was 2 (range, 0-15), with 15 % on frontline therapy, 75 % on salvage therapy, and 10 % untreated. With respect to disease status, 40 %, 18 %, 4 %, 13 %, and 15 % were in CR, PR, MR, SD, and PD, respectively, and 9 % were not evaluable.

Antibody decline between the second and third doses of COVID-19 mRNA vaccination

Prior to their third vaccination, 80 % (12/15) of patients who had a partial response (50-250 BAU/mL) to their first two doses and 42 % (13/31) of patients who had a full response (>250 BAU/mL) fell below 50 BAU/mL. Twenty percent (3/15) of patients who had a partial response and 42 % (13/31) of patients who had a full response showed partially protective levels between 50 and 250 BAU/mL at D3-baseline. Only 16 % (5/31) of patients who had a full response remained at > 250 BAU/mL at D3-baseline, and only 20 % (3/15) of patients who had a partial response remained at their 50-250 BAU/mL level. All patients who were non-responders (n = 21) remained < 50 BAU/mL prior to their third vaccination (Fig. 1A). A longer duration between D2 and D3 was weakly associated with a greater drop in antibody levels from D2W2 to D3-baseline (Spearman r = -0.3009; p = 0.0133).

Third dose response

Among all patients, 61 % (n = 41), 22 % (n = 15), and 16 % (n = 11) showed full response, partial response, and no response, respectively. For those who did not respond to the first two doses (n = 21), nearly half (10/21) had no response to D3, whereas 38 % (8/21) achieved a partial response and 14 % (3/21) a full response. One-third (5/15) of patients who had a partial response to the two-dose series had a similar partial response to D3, and the remaining 67 % (10/15) achieved a full response. Ninety percent (28/31) of patients who had a full response to the two-dose series also had a full response to D3, with only 6 % (2/31) showing a partial response and 3 % (1/31) no response (Fig. 1B).

A multivariate binary logistic regression model examined the effects of the following variables on D3 response: sex, age at D3, D2 and D3 mRNA vaccine types, treatment with ruxolitinib (RUX) or elotuzumab (ELO) with an immunomodulatory drug (IMiD) at D3, M-protein at D3-baseline, and D2 response. Only D2 response was found to be positively correlated with D3 response. Multivariate analysis was then performed excluding D2 response and including the following variables: sex, days from D1 to D3W2, D2 mRNA vaccine type, and treatment with RUX or ELO with an IMiD at D3. This second analysis found that a longer time interval between D1 and D3W2 was positively correlated with D3 response, and that treatment with RUX and receiving BNT162b2 for doses one and two were negatively correlated with D3 response (Supplemental Fig. 1).

Vaccine response by mRNA vaccine type (mRNA-1273 vs BNT162b2)

When categorized according to the original mRNA vaccine type that the patient received for their first two doses of vaccination, the mRNA-1273 group had higher antibody levels than the BNT162b2 group at D2W2 (p = 0.0002), D3-baseline (p = 0.0332), and D3W2 (p = 0.0069), despite no significant difference at D0 (Fig. 3). Notably, antibody levels at D3-baseline were higher among those who had received mRNA-1273 (average = 39.0 BAU/mL; range, 1-683.8 BAU/mL) than among those who had been treated with BNT162b2 (average = 17.5 BAU/mL; range, 1-289.2 BAU/mL; p = 0.0328; Fig. 3). Based only on the type of mRNA vaccination patients received for their first two doses, including those who received BNT162b2 for their first two doses before switching to mRNA-1273, the median difference between average D3W2 and D2W2 levels among patients treated with the mRNA-1273 and BNT162b2 vaccines was 556.1 BAU/mL and 198.4 BAU/mL (p = 0.1767), respectively (Supplemental Fig. 2).

Immunologic response among non-responders and partial responders

Next, we compared poor relative responders (<10-fold increase from baseline antibody levels to D2W2) with good relative responders (>10-fold increase from baseline antibody levels to D2W2) among patients whose D2W2 antibody levels were < 250 BAU/mL. Most (85 % [11/15]) of those who responded to the two-dose series with a > 10-fold increase from pre-vaccination antibody levels to D2W2 antibody levels responded further to D3, whereas only 43 % (10/23) of those who had a < 10-fold increase responded further (p = 0.0401) (Fig. 5). Patients who received BNT162b2 were further broken down into those who again received BNT162b2 for their third dose (P-P-P) (n = 18) and those who switched to mRNA-1273 (P-P-M) (n = 19), to compare their antibody levels at D0, D2W2, D3-baseline, and D3W2.
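The response categories and fold-change criterion used above are simple to encode. The sketch below is a hypothetical helper, not taken from the paper: the function names are illustrative, and the 1.0 BAU/mL assay floor is an assumption based on the reported ranges starting at 1.0.

```python
# Categorize vaccine responses using the paper's BAU/mL cutoffs and the
# >10-fold relative-change criterion; function names are illustrative.

def response_category(bau: float) -> str:
    """Full (>250), partial (50-250), or no (<50) response in BAU/mL."""
    if bau > 250:
        return "full"
    if bau >= 50:
        return "partial"
    return "none"

def is_good_relative_responder(baseline: float, d2w2: float) -> bool:
    """>10-fold increase from pre-vaccination baseline to D2W2."""
    return d2w2 > 10 * max(baseline, 1.0)  # assumed assay floor of 1.0 BAU/mL

if __name__ == "__main__":
    for base, peak in [(1.0, 144.8), (25.0, 370.4), (1.0, 8.0)]:
        print(base, peak, response_category(peak),
              is_good_relative_responder(base, peak))
```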
Myeloma treatment-related effects on COVID-19 antibody response

Patients were separated into five groups according to their current MM treatment: anti-CD38 monoclonal antibody (anti-CD38)-, RUX-, PI-, and ELO-containing therapies, and all other regimens not containing these drugs (no ELO, no RUX, no anti-CD38, no PI; Fig. 6). Patients with overlapping treatments were excluded. Patients in the no ELO, no RUX, no anti-CD38, no PI group exhibited an approximately 2-fold increase in antibody levels from D2 to D3 (average D2W2 = 281.0 BAU/mL; average D3W2 = 578.2 BAU/mL). Similarly, the PI-treated group exhibited an approximately 3-fold increase in antibody levels (average D2W2 = 234.5 BAU/mL; average D3W2 = 762.9 BAU/mL). The ELO-treated group exhibited a 4.5-fold increase (average D2W2 = 258.1 BAU/mL; average D3W2 = 1357.0 BAU/mL). Those treated with anti-CD38-containing treatments showed the lowest D2 antibody levels (average D2W2 = 42.2 BAU/mL), and their average D3 response was only 165.2 BAU/mL. RUX-treated patients had an average D2 response of only 102.4 BAU/mL and an even lower average D3 response of 82.6 BAU/mL.

The same patients were then divided into three groups: ELO-containing therapies with an IMiD, IMiD without ELO, and IMiD without RUX, anti-CD38, or ELO (Fig. 7). Patients with overlapping treatments were again excluded. Patients on PIs were not excluded from this third group, as the previous data had already shown PIs to have little or no effect on vaccine response. The IMiD without ELO group had average D2 and D3 antibody levels of 118.4 BAU/mL and 292.3 BAU/mL, respectively. The IMiD without ELO, RUX, or anti-CD38 group had average D2 and D3 responses of 169.0 BAU/mL and 399.6 BAU/mL, respectively. The ELO with IMiD group had average D2 and D3 levels of 271.2 BAU/mL and 1201.0 BAU/mL, respectively. Median lines of treatment were 3 for all three groups.

(Figure caption) Breakdown of the percentage of patients who achieved a D3 response that was greater than their D2 response and the percentage of patients who had a D3 response that was less than or equal to their D2 response.

Discussion

Previous studies have shown that only a minority of patients with active MM show full responses to the first two doses of COVID-19 vaccination (Stampfer et al., 2021;Agha et al., 2019). Thus, it is important to determine the efficacy of a booster dose in this at-risk population and the factors that determine the response to this vaccination. Studies have recently shown that immunosuppressed populations demonstrate a marked decline in antibody levels in the months following the first two doses, and that administration of a third vaccination effectively boosts immunity (Obeid et al., 2022;Kamar et al., 2021;Hall et al., 2021;Greenberger et al., 2021). This has been observed even among those who do not respond to the two-dose series (Shapiro et al., 2022;Herishanu et al., 2022).

In this study of MM patients, we have shown that anti-spike IgG antibodies markedly decline between the peak level achieved following the second dose and just prior to the administration of the third dose, with only a small minority (7 % [5/67]) maintaining clinically effective antibody levels above 250 BAU/mL. Similarly, another study found the majority (78 %) of patients with a variety of B-cell malignancies to be seronegative prior to a third dose of a COVID-19 mRNA vaccine (Greenberger et al., 2021). Notably, nearly two-thirds of patients in that study showed increased antibody levels following their third dose. In the current study, peak levels after the third vaccination were higher than those following the second dose for most patients but varied widely between them. This may indicate that most patients could generate an anamnestic response (Liu et al., 2022).
Specifically, 61 % (41/67), 22 % (15/67), and 16 % (11/67) showed a full, partial, and no relative response, respectively, following the third vaccination. Thus, the majority of patients (84 % [56/67]) derived some degree of protection from this third dose, a proportion higher than after the first two doses (Stampfer et al., 2021). This is also consistent with findings in other immunosuppressed populations, with one study in the context of solid organ transplants finding a third dose to significantly improve the immunogenicity of mRNA COVID-19 vaccines (Kamar et al., 2021). Another study performed among transplant recipients found 71 % to exhibit antibody responses to a third dose, with overall levels rising significantly relative to those prior to this third dose (Hall et al., 2021). Among patients seronegative prior to their third dose of COVID-19 vaccine, 56 % of cancer patients seroconverted following their third dose (Shapiro et al., 2022). Another study found that CLL patients who did not respond to the first two doses of BNT162b2 responded in approximately one-quarter of cases (Herishanu et al., 2022). Similarly, in the current study, more than half (52 % [11/21]) of the patients who were below the 50 BAU/mL threshold defined as "non-responders" following their first two doses displayed some degree of response to this third dose, though this was lower than the responses to the third dose among those who showed higher peak levels following the first two doses. This included the 22 % (15/67) of patients in the 50-250 BAU/mL category and the 46 % (31/67) of patients in the > 250 BAU/mL category following the first two doses. Specifically, all of these patients except one (98 % [60/61]) showed some degree of response to the third dose. This is particularly important, as the magnitude of the anti-spike IgG antibody response to vaccination correlates with other indicators of an immune response, as well as with the degree of protection from COVID-19 (Hall et al., 2022;Salazar et al., 2020;Vályi-Nagy et al., 2021).

Factors related to these peak levels (D3W2) included the level of COVID-19 antibodies achieved following the first two vaccines, the amounts of these antibodies just prior to the third vaccination, the type of mRNA vaccination (mRNA-1273 versus BNT162b2), and the type of myeloma treatment patients were receiving at the time of their third vaccination. Consistent with our study and others following the first two vaccinations (Stampfer et al., 2021;Naranbhai et al., 2022;Andrews et al., 2022), antibody levels among patients who received mRNA-1273 for all three doses were higher across all time points since receiving their second dose than the antibody levels of those who received BNT162b2 for all three doses (Fig. 3). COVID-19 antibody levels drawn just prior to D3 were also higher among those who received mRNA-1273 than among BNT162b2-treated patients. This suggests that mRNA-1273-treated patients may maintain effective levels of antibody longer and, thus, may not need to be boosted as often as BNT162b2-treated patients. In fact, a recent study found that the magnitude and duration of neutralizing antibodies in response to the mRNA-1273 vaccine were substantially greater and longer than to the BNT162b2 vaccine among patients with solid and hematologic cancers, solid organ transplants, or autoimmune diseases, as well as among healthy controls (Obeid et al., 2022).
Although some patients who received BNT162b2 for their two-dose series selected mRNA-1273 as their booster, this made no significant difference in overall antibody level compared with continuing with BNT162b2 vaccination (Figs. 3 and 4). Individuals experiencing a more robust immune response to the two-dose series of mRNA-1273 may also have greater immunological memory (Mairhofer et al., 2021), allowing for a more robust response to a third dose (Liu et al., 2022). An alternative explanation is that patients in the P-P-M group showed lower responses following D2 than those in the P-P-P group, though this difference was not significant. This is likely due to patients who had poorer responses being the ones who chose to switch to mRNA-1273 for their third vaccination. Patients who received two doses of BNT162b2 followed by mRNA-1273 did, however, achieve a greater fold change above their D2W2 peak level compared to patients who only received BNT162b2. This is a notable finding, given that the P-P-M group had markedly lower D2W2 responses than the P-P-P group and would have been expected to have poorer D3W2 responses. A technical briefing from the UK Health Security Agency showed that recipients of BNT162b2 have higher vaccine efficacy against Omicron if they switch to mRNA-1273 for their booster dose (Health Security, 2021), further supporting a change to mRNA-1273 vaccination for those who had received BNT162b2 for their first two vaccinations.

MM treatment with PIs did not appear to affect vaccine response, as has been previously reported (Herzog Tzarfati et al., 2021). However, treatment with the anti-CD38 monoclonal antibodies daratumumab and isatuximab, the Janus kinase 1/2 (JAK1/2) inhibitor RUX, and the signaling lymphocytic activation molecule F7 (SLAMF7) monoclonal antibody ELO did show differences in antibody responses. Anti-CD38 antibody therapies such as daratumumab and isatuximab indiscriminately target the CD38 glycoprotein on both healthy and malignant plasma cells (Frerichs et al., 2020). Consequently, the destruction of healthy plasma cells by these treatments likely results in the patients' reduced response to COVID-19 vaccination due to decreased production of immunoglobulins. Consistent with the impaired antibody response to the third vaccine seen in our study, only a minority (42 %) of patients with MM receiving anti-CD38 treatment achieved clinically significant neutralizing antibodies after vaccination. In another study, 13-30-fold lower ELISA binding titers were observed among vaccinated patients with MM receiving anti-CD38 treatments compared to their counterparts on other treatment regimens. In contrast, ELO-treated patients showed higher antibody levels. This monoclonal antibody targets SLAMF7 on both MM and NK cells (Campbell et al., 2018). It activates NK cells (Cox et al., 2021), which may explain the more robust antibody responses among ELO-treated patients. Similarly, IMiDs boost immune system activity via enhancement of T and NK cells, increased cytokine production, and enhancement of dendritic cells (Costa et al., 2017). However, IMiDs alone do not necessarily improve vaccine responses (Avivi et al., 2021), which was also seen in our study. Instead, it was the combination of ELO and an IMiD that appeared to enhance antibody responses. This is consistent with findings that the combination of lenalidomide with ELO allowed the latter to enhance humoral immunity via mDCs by increasing the immunostimulatory effects of IMiDs (Azuma et al., 2019).
Our inclusion of RUX-treated patients in this study stems from both preclinical (Chen et al., 2020) and recent clinical studies demonstrating the efficacy of this JAK inhibitor for MM. JAK inhibitors have been associated with impaired cellular responses (McLornan et al., 2015). The impaired vaccine responses among RUX-treated MM patients are consistent with studies in myelofibrosis that found both reduced COVID-19 vaccine efficacy and a reduced ability to respond promptly (Ikeda et al., 2022). Coincidentally, RUX and other JAK inhibitors have been shown to be effective for treating severe COVID-19 infections, potentially as a result of the anti-inflammatory effects of these drugs (Sarmiento et al., 2021). These results certainly provide support for further studies of the impact and timing of specific myeloma treatments on antibody responses to these vaccinations. Whether drug holidays or vaccination during specific parts of the treatment cycle in different myeloma therapies will help improve impaired antibody responses to COVID-19 vaccination is unknown.

Conclusion

These data highlight that monitoring MM patients' anti-spike IgG antibody levels before and after COVID-19 vaccinations can further our understanding of how to optimize the use of vaccines for this at-risk population. These results will help inform patients and healthcare professionals on the likelihood of responses to COVID-19 vaccinations, including boosters, and guide clinicians' use of these vaccines for MM patients. In addition, our results demonstrate that a third dose of an mRNA COVID-19 vaccine achieves responses for most patients, including those who failed to respond to the first two vaccinations, and that the magnitude of these responses surpasses that achieved by the first two doses. A limitation of this and similar studies is that protection against COVID-19 infection and its severity based on antibody levels following vaccination is likely to vary depending on the COVID-19 variant to which the patient is exposed. Ongoing studies with likely different vaccines and antibody tests will be necessary to help establish protective antibody levels as this virus continues to evolve.

Ethical Approval Statement by the Authors: All procedures were performed in compliance with relevant laws and institutional guidelines. Informed consent for the collection of blood was obtained from the patients in the form of written consent. Patient privacy was always observed, with no identifying information included in this study. As this was a retrospective observational study, IRB approval was deemed unnecessary.

Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Cancer Research and by the National Institute of Allergy and Infectious Diseases under award T32AI074492. The authors would like to thank the patients for their contributions to the study and all healthcare professional staff involved in collecting clinical specimens.

Data Sharing Statement: For original data, please contact the corresponding author.
The fifth leaf and spike organs of barley (Hordeum vulgare L.) display different physiological and metabolic responses to drought stress
Background
Photosynthetic organs of the cereal spike (ear) provide assimilate for grain filling, but their response to drought is poorly understood. In this study, we characterized the drought response of individual organs of the barley spike (awn, lemma, and palea) and compared them with a vegetative organ (fifth leaf). Understanding differences in physiological and metabolic responses between the leaf and spike organs during drought can help us develop high-yielding cultivars for environments where terminal drought is prevalent.
Results
We exposed barley plants to drought by withholding water for 4 days at the grain filling stage and compared changes in: (1) relative water content (RWC), (2) osmotic potential (Ψs), (3) osmotic adjustment (OA), (4) gas exchange, and (5) metabolite content between organs. Drought reduced RWC and Ψs in all four organs, but the decrease in RWC was greater, and the change in Ψs smaller, in the fifth leaf than in the spike organs. We detected evidence of OA in the awn, lemma, and palea, but not in the fifth leaf. Rates of gas exchange declined more rapidly in the fifth leaf than in the awn during drought. We identified 18 metabolites, but only ten metabolites accumulated significantly during drought in one or more organs. Among these, proline accumulated in all organs during drought, while accumulation of the other metabolites varied between organs. This may suggest that each organ in the same plant uses a different set of osmolytes for drought resistance.
Conclusions
Our results suggest that photosynthetic organs of the barley spike maintain higher water content, greater osmotic adjustment, and higher rates of gas exchange than the leaf during drought.
Electronic supplementary material: The online version of this article (doi:10.1186/s12870-016-0922-1) contains supplementary material, which is available to authorized users.
Background
Drought reduces crop yield more than any other environmental factor [1,2]. Plants are particularly sensitive to drought during the reproductive stage of their life cycle [3][4][5]. Pre-anthesis drought can cause sterility and senescence of flowers [3], and post-anthesis drought can reduce seed size [6,7]. The effect of drought on cereal crops has been well studied, but most research has focused on vegetative structures (i.e., leaves). Comparatively little is known about the response of the photosynthetic organs in the spike (ear) to drought. The spike is an important supplier of assimilate for seed development [8][9][10]. Barley (Hordeum vulgare L.) is an important malting, food, and feed crop [11] and ranks fourth in global production among cereal crops behind corn, paddy rice, and wheat [12]. Because barley originated in a semi-arid region, known historically as the Fertile Crescent [13], it is relatively resistant to periods of water shortage [14]. Barley displays three strategies for coping with drought [15,16]: escape, avoidance, and tolerance. Varieties from regions characterized by terminal drought (drought at the reproductive stage) complete their life cycle before the onset of severe water deficit [17][18][19][20], which is consistent with a drought escape strategy [21,22]. By contrast, plants using a drought avoidance strategy maintain sufficient cellular hydration when water is scarce [21][22][23].
Common drought avoidance mechanisms in barley include minimizing water loss via stomatal control [24], production of an extensive root system to extract soil moisture [25,26], and altering metabolism to accumulate compatible solutes (osmolytes) for osmotic adjustment [27,28]. Drought tolerant varieties maintain physiological functions at low tissue water potentials [21,22]. Typical drought tolerance mechanisms in barley include synthesis of proteins and compatible solutes to detoxify reactive oxygen species (ROS) and stabilize macromolecules and membranes [29][30][31][32] and mobilization of stem reserves (e.g., glucose, fructose, sucrose, and fructans) to supply carbon for grain filling [33][34][35][36]. These three contrasting strategies can also be used in combination [15], highlighting the complexity of drought response in barley and the challenges associated with developing cultivars for dry environments. The spike organs of barley (lemma, palea, and awn) are photosynthetically active and contribute as much as 76 % of the dry weight of the kernel [46][47][48]. Because of its larger size, the awn can account for up to 90 % of spike photosynthesis in barley under normal conditions [49]. The spike is resistant to drought, and spike photosynthesis is particularly important for grain filling during shortages of water. The spike has several attributes that confer resistance to drought stress. Relative to the leaf, the spike has better CO2 diffusive conductance during drought [9], suggesting efficient assimilation of CO2 per unit of water transpired [9,50,51]. The spike has better osmotic adjustment [52], delayed senescence [53,54], a greater capacity to transport assimilate [54], and a photosynthetic metabolism suspected to be intermediate between C3 and C4 pathways [54]. Further, the lemma and palea tightly enclose the developing kernel and recycle respired CO2 [9,51,53,55]. The significance of the spike for grain filling is amplified during drought [9,10,56], with some authors suggesting that spike photosynthesis can be used as a selection tool for developing drought resistant cereals [53,57,58]. Emerging evidence also suggests that the various organs of the barley spike respond differently to drought. Transcriptome analysis by our group found that drought alters expression of more genes in the awn than in the lemma, palea, and kernel [59]. However, it is not clear whether these changes at the transcription level lead to accumulation of proteins and metabolites required for drought resistance. In this study, we examined whether metabolite accumulation in response to drought at the early stages of grain filling differs between the fifth (penultimate) leaf and spike organs (lemma, palea, awn) of barley using non-targeted metabolite profiling. We also compared the water status and gas exchange of these photosynthetic organs during drought. To our knowledge, this is the first study to compare physiological and metabolic changes in individual spike organs and the leaf of barley in response to terminal drought. Understanding differences in physiological and metabolic responses between the leaf and spike organs during drought can help us develop better approaches to increase yield of cereals in environments where terminal drought is prevalent.
Methods
Plant materials and growth conditions
We used a six-row, drought tolerant [60] barley variety (Hordeum vulgare L. var. Giza 132) for this study.
The seeds were obtained from the National Small Grains Collection of the United States Department of Agriculture, Aberdeen, Idaho. We grew plants in 2.5 L pots (16 cm top diameter × 12 cm bottom diameter × 17 cm height) filled with 800 g of soil (17 % topsoil, 50 % Canadian peat moss, 25 % vermiculite, and 8 % rice hulls). Before planting the seeds, the soil was saturated with water to a total weight of 1200 g. In each pot, we planted eight seeds, two cm deep, with the awn end up in an evenly-spaced, circular pattern. Then, 5 g of Osmocote® (Scotts Company LLC, Marysville, OH) slow release fertilizer (N-P-K 19-6-12) was added. All planting occurred between 0900 and 1000 CST (3-4 h into the photoperiod). We grew the plants in a controlled growth chamber (Conviron CMP-6050 connected to a Thermoflex 10,000 chiller) under conditions of 16 h photoperiod, 22°C days/18°C nights, and 60 % relative humidity. In the morning, we stepped up light intensity (219, 437, 656, and 715 μmol m−2 s−1) in half hour intervals, and at the end of the day, we stepped down light intensity in the same manner. We fertilized each pot with 100 mL of 4 g/L Jack's Professional with magnesium (N-P-K 20-19-20) twice: (1) one week after planting and (2) two weeks before samples were collected. At Zadoks stage 12 (second leaf unfurled) [61], we thinned the number of seedlings to five per pot to ensure a uniform stand. For the first 3 weeks after planting, we watered all pots to a final weight of 1200 g every other day to promote seedling establishment. After 3 weeks, we watered all pots to a final weight of 1200 g daily until commencing the drought treatment. All watering occurred between 0900 and 1000 CST (3-4 h into the photoperiod).
Drought treatment
At Zadoks stage 71 (kernel watery ripe) [61], plants were randomly assigned to either the "control" group or the "stressed" group. Control pots were watered to 1200 g total weight each day. Plants in the stressed group were exposed to drought by withholding water for 4 days. More specifically, stressed pots were weighed each day and water was added to bring the weight of each pot to that of the heaviest stressed pot, which was 900 g (day 1), 790 g (day 2), 630 g (day 3), and 580 g (day 4).
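A minimal sketch of this gravimetric watering logic follows (Python is used purely for illustration; the day-to-target mapping is the one reported above, while the example pot weights are hypothetical):

```python
# Gravimetric watering sketch: bring each pot up to its day's target weight.
# Stressed targets equal the weight of the heaviest stressed pot that day.

STRESS_TARGETS = {1: 900, 2: 790, 3: 630, 4: 580}  # day -> target weight (g)
CONTROL_TARGET = 1200                              # control pots: 1200 g daily

def water_to_add(current_weight_g: float, day: int, stressed: bool) -> float:
    """Grams (~mL) of water needed to reach the day's target; never negative."""
    target = STRESS_TARGETS[day] if stressed else CONTROL_TARGET
    return max(0.0, target - current_weight_g)

print(water_to_add(765.0, day=2, stressed=True))  # -> 25.0 g for a 765 g pot
print(water_to_add(790.0, day=2, stressed=True))  # -> 0.0 (already at target)
```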
Experimental design
We examined changes in water status (relative water content, osmotic potential, osmotic adjustment), gas exchange (photosynthesis and stomatal conductance), and metabolite content in the fifth (penultimate) leaf and spike organs of barley during drought. Measurements of relative water content (RWC), osmotic potential (Ψs), and gas exchange are based on three replicates (pots) using a completely randomized design. Specifically, we randomly selected one plant from three different pots for each treatment, measured gas exchange on the fifth leaf and awns, and then harvested the fifth leaf and spike organs (awn, lemma, palea) of that plant to quantify RWC. We repeated this protocol every day of the 4-day drought treatment using the remaining plants in each pot. Measurements of osmotic potential and metabolite accumulation are based on six replicates (blocks) using a randomized complete block design. The six replicates were planted on different days due to space limitations. We harvested the fifth leaf and spike organs (awn, lemma, palea) on the fourth day of drought stress for analysis. The main experimental factors used for analysis were treatment (control vs. stressed) and organ type. Date of planting was included as a random (block) factor.
Relative water content
We measured relative water content (RWC) of the fifth (penultimate) leaf, awn, lemma, and palea of control and stressed plants each day of the 4-day drought treatment. Each day, we harvested the four organs and immediately recorded their fresh weight. Next, we submerged each organ in 15 mL of distilled water in a 100 × 15 mm Petri dish and placed them in darkness for 24 h at 4°C. We note that the tips of the leaves and the awns became progressively discolored as the drought grew more severe. As a result, RWC was measured from the basal, green portion of the fifth leaf and awn. By the end of the 4-day treatment, about a quarter of the tip of the leaf and awn was discolored in the stressed plants and was excluded from all measurements. The fifth leaf and awns were cut into ~1 cm segments to facilitate diffusion of water. The next day, we measured turgid weight after removing all traces of water on the surface of the samples using a Buchner funnel and gentle vacuum. Each organ was then dried at 70°C for 24 h and dry weights were measured. We calculated RWC from fresh (FW), turgid (TW), and dry (DW) weights using the equation: RWC (%) = (FW − DW)/(TW − DW) × 100.
Osmotic potential
We measured osmotic potential (Ψs) of the fifth leaf, awn, lemma, and palea of control and stressed plants on the fourth day of drought treatment. Organs were harvested, frozen in liquid nitrogen, and stored at −80°C prior to analysis. Each frozen sample was transferred to a 0.5 mL centrifuge tube with a hole in the bottom. The tube was placed into another 1.5 mL tube and centrifuged at 12,000 × g for 10 min. We used 10 μL of the sap to measure osmolality using a vapor pressure osmometer (Vapro® 5520, Wescor Inc., Logan, Utah). Osmolality values were converted to osmotic potential using the van't Hoff relation, Ψs = −cRT, where Ψs is osmotic potential in megapascals (MPa), c is osmolality of the sap in mosmol kg−1, R is the gas constant, and T is the absolute temperature [62].
Osmotic adjustment
Osmotic adjustment (OA) is the lowering of Ψs due to net solute accumulation in response to water deficit. We measured OA of the fifth leaf, awn, lemma, and palea on the fourth day of drought stress according to the rehydration method [63][64][65]. In brief, we calculated OA for each organ as the difference between Ψs of the control tissue at full turgor and Ψs of stressed tissue at full turgor. Ψs at full turgor was measured after rehydrating control and stressed samples in 15 mL of distilled water in a 100 × 15 mm Petri dish for 24 h in darkness at 4°C. All traces of surface water were removed from the samples using a Buchner funnel and gentle vacuum. The samples were frozen in liquid nitrogen and stored at −80°C until needed. We then thawed the samples, extracted the sap, and measured osmolality (see the osmotic potential measurement above for methods) with a vapor pressure osmometer (Vapro® 5520, Wescor Inc., Logan, Utah).
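The three water-status quantities above reduce to a few arithmetic steps. The sketch below implements them under the stated definitions; the example inputs are illustrative assumptions, not values from this study:

```python
# Water-status calculations as defined above; example numbers are made up.

GAS_CONSTANT = 8.314e-6  # MPa m^3 mol^-1 K^-1, for the van't Hoff relation

def rwc_percent(fresh_g: float, turgid_g: float, dry_g: float) -> float:
    """RWC (%) = (FW - DW) / (TW - DW) * 100."""
    return (fresh_g - dry_g) / (turgid_g - dry_g) * 100.0

def osmotic_potential_mpa(osmolality_mosmol_kg: float, temp_k: float = 298.15) -> float:
    """Psi_s = -cRT; 1 mosmol/kg is ~1 mol/m^3 in dilute sap."""
    return -osmolality_mosmol_kg * GAS_CONSTANT * temp_k

def osmotic_adjustment(psi_control_turgid: float, psi_stressed_turgid: float) -> float:
    """OA = Psi_s(control, full turgor) - Psi_s(stressed, full turgor)."""
    return psi_control_turgid - psi_stressed_turgid

print(round(rwc_percent(0.50, 0.55, 0.10), 1))     # 88.9 (%)
print(round(osmotic_potential_mpa(600.0), 2))      # -1.49 (MPa)
print(round(osmotic_adjustment(-1.10, -1.45), 2))  # 0.35 (MPa)
```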
Gas exchange
Each day of the drought treatment, we randomly selected one plant from three control and three stress treatment pots and measured photosynthesis (A) and stomatal conductance (gs) of the fifth leaf and the awns using an open gas-exchange system (LI-6400, LI-COR Inc., Lincoln, NE). For the fifth leaf, we measured gas exchange at a controlled cuvette temperature of 22°C, a vapor pressure deficit of 1.5-1.7 kPa, and a saturating irradiance of 2000 μmol m−2 s−1. For the awns, we measured gas exchange using the needle gasket of the LI-6400. Measurements were made on two awns of the fourth spikelet (from the base of the inflorescence) under the same cuvette conditions as the fifth leaf, except that the vapor pressure deficit was set to ~2.5 kPa. All measurements were made between 0900 and 1000 CST (3-4 h into the photoperiod). After recording the gas exchange measurements, leaves and awns were harvested to determine surface area. We measured leaf area using a digital caliper. The 3 cm region of the awn we used for gas exchange resembles a triangular prism with a 120° angle on the abaxial surface and 30° angles on each corner of the adaxial surface [66]. Therefore, awn area was calculated by measuring the width of the adaxial surface in ImageJ (http://imagej.nih.gov/ij/) and calculating the width of the remaining sides from these angles.
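The awn-area calculation follows directly from the stated prism geometry. A short sketch (with a hypothetical adaxial width, since the real widths came from ImageJ) shows the law-of-sines step:

```python
import math

# Awn surface area from the triangular-prism geometry described above:
# 120 deg abaxial angle, two 30 deg adaxial corners, adaxial width w.
# The width (1.2 mm) is a made-up value for illustration.

def awn_area_mm2(adaxial_width_mm: float, length_mm: float) -> float:
    # Law of sines on the cross-section: side / sin(30) = width / sin(120)
    side = adaxial_width_mm * math.sin(math.radians(30)) / math.sin(math.radians(120))
    # Transpiring surface = adaxial face + the two abaxial faces
    return (adaxial_width_mm + 2.0 * side) * length_mm

print(round(awn_area_mm2(1.2, 30.0), 1))  # ~77.6 mm^2 for a 3 cm segment
```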
Metabolite extraction, derivatization, and analysis
To analyze metabolites, we harvested the fifth leaf, awn, lemma, and palea from three to four plants per pot on the fourth day of drought treatment between 1100 and 1300 CST (5-7 h into the photoperiod). The three lowest and three highest spikelets on the spike were excluded from this analysis. We also removed 1 cm from the base and 2 cm from the tip of the awn (because of discoloration in the tip of stressed plants) and 2-3 cm from the tip of the leaf (because of senescence in stressed plants). The samples were frozen in liquid nitrogen and stored at −80°C. We ground the frozen samples in liquid nitrogen using a mortar and pestle, and a 100 mg sub-sample was used for extraction and derivatization of polar metabolites according to Lisec et al. [67]. A solution of ribitol (60 μL of a 20 μg/mL stock) was added as internal standard. The derivatized extract was dried under vacuum, dissolved in 200 μL chloroform, and transferred to a 300 μL GC vial. One μL of sample was injected into an Agilent 6890 GC instrument (Agilent, Santa Clara, CA) equipped with a Hewlett Packard 5973 MSD and a Restek Rtx®-5MS-Low-Bleed GC-MS column. The instrument was set at 230°C, in split mode, with a split ratio of 16.5:1. The oven was set to an initial temperature of 80°C. After holding for 2 min, the temperature was increased at a rate of 9°C per min to a final temperature of 290°C. The system was held at 290°C for 6 min. Helium was used as the carrier gas and set to a flow rate of 1.2 mL/min. Gaseous compounds eluted from the GC were fed into an Agilent 5973 mass spectrometer (Agilent, Santa Clara, CA) and bombarded by an electron impact (EI) ionization source with an ionization energy of −70 eV at a temperature of 200-250°C for further separation based on mass-to-charge ratio. Ions were detected on a quadrupole mass selective detector. Acquired spectra were deconvoluted, quantified, and identified using AMDIS (Automated Mass Spectral Deconvolution and Identification System, http://chemdata.nist.gov/dokuwiki/doku.php?id=chemdata:amdis). Initially, we matched peaks to spectra from the National Institute of Standards and Technology (NIST) MS Search 2.0 mass spectral database. We used authentic targets and standard libraries to confirm peak identities in AMDIS. In addition to the retention index (RI) function in AMDIS, we converted the output from AMDIS to a spreadsheet and verified the RI manually. The integrated signal (after deconvolution) for each metabolite within an injection was divided by the integrated signal for ribitol to obtain relative amounts (response ratio).
Statistical analysis
We analyzed RWC, photosynthesis, and stomatal conductance data using repeated measures ANOVA with three factors: treatment (control vs. stress), organ type, and time (day of treatment). Treatment and organ were between-subject factors and time was the repeated measures factor (within-subject factor). Variation between pots (nested within treatment) was included as a random factor. This analysis is represented by the linear model y_ijkl = μ_ijk + pot_l(i) + ε_ijkl, where y_ijkl is the response at treatment level i, in organ j, at time k, and in pot l; μ_ijk is the mean of each treatment × organ × time combination; pot_l(i) is experimental error due to the effect of pot l receiving treatment i; and ε_ijkl is sampling error due to variation among plants within pots. The model assumes there is no time × pot interaction. Treatment, organ type, time, and their interactions are fixed effects, and pot_l(i) and ε_ijkl are random effects. ANOVAs with repeated measures are particularly susceptible to violating the assumption of sphericity, the condition where differences between pairs of repeated measures factors have equal variance and equal covariance. We tested four covariance structures to assess correlations between levels of the repeated measures factor (time): compound symmetric (CS), autoregressive order one (AR(1)), Huynh-Feldt (HF), and unstructured (UN). AR(1), HF, and UN failed to converge, so significance tests were performed based on CS. For interaction effects, we used Tukey's pairwise comparison to determine differences between pairs of treatment × organ combinations at each time point. To determine differences in Ψs and metabolite accumulation in response to drought, we used the linear model y_ijk = μ + treatment_i + organ_j + (treatment × organ)_ij + block_k + ε_ijk, where y_ijk is the response at treatment level i, in organ j, and block k; μ is the overall mean; and ε_ijk is the residual deviation. In this model there is no treatment × block interaction, and variance from block to block is assumed to be constant. We then used Tukey's pairwise comparison to further examine the treatment effect in each organ. We tested the assumptions of normality and homoscedasticity (equal variance) in ANOVA using PROC UNIVARIATE and Levene's test with the Brown-Forsythe (BF) option in PROC GLM. These tests revealed that RWC and gas exchange data were normally distributed with homogeneous variance. Accordingly, we performed the repeated measures ANOVA on untransformed data using the REPEATED statement in PROC MIXED. Osmotic potential and the metabolite data were neither normally distributed nor of constant variance. We corrected non-normality and heterogeneous variance using the Box-Cox power transformation. This transformation improved variability in the data, but a few metabolites were still heterogeneous. Because ANOVA is robust to non-normal and heteroscedastic data, we tested mean differences in PROC MIXED using the transformed osmotic potential and metabolite data. All statistical analyses were performed in SAS v. 9.4 (SAS Institute Inc., Cary, NC).
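For readers without SAS, the repeated measures model can be approximated with a random intercept per pot, which induces exactly the compound-symmetric (CS) covariance selected above. The following Python/statsmodels sketch assumes a long-format table with hypothetical column names:

```python
# Approximating the PROC MIXED analysis in Python. A random intercept for
# each pot induces a compound-symmetric covariance over the repeated
# measures, matching the CS structure chosen above. The CSV file and its
# column names are assumptions about how the data might be laid out.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rwc_long.csv")  # columns: rwc, treatment, organ, day, pot

model = smf.mixedlm(
    "rwc ~ C(treatment) * C(organ) * C(day)",  # fixed effects + interactions
    data=df,
    groups="pot",  # pot IDs unique within treatment -> pot(treatment)
)
print(model.fit().summary())
```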
Results
Water status of the fifth leaf and spike organs during drought
Relative water content (RWC) differed significantly between treatments (control vs. stressed plants), organs, time (day of treatment), and their interactions (Additional file 1: Table S1). In control plants, RWC did not vary between days in any organ during the treatment period (Fig. 1). Average RWC was highest in the fifth leaf (96 %), followed by the awn (85 %), lemma (83 %), and palea (74 %). In stressed plants, RWC declined progressively during the treatment period in every organ (Fig. 1). In stressed fifth leaves, RWC decreased from 94 to 49 % (Fig. 1a), which was the largest loss of water in any organ. By the fourth day of treatment, the leaves of stressed plants were severely wilted. In stressed awns, RWC decreased from 85 to 66 % (Fig. 1b), which was the smallest loss in RWC of any organ. In stressed lemmas, RWC decreased from 83 to 58 % (Fig. 1c), and in stressed paleas, RWC decreased from 77 to 58 % (Fig. 1d). Osmotic potential (Ψs) differed significantly between treatments, organs, and their interaction (Additional file 1: Table S1). In control plants, Ψs was lowest in the fifth leaf (−1.65 MPa) followed by the palea (−1.53 MPa), awn (−1.46 MPa), and lemma (−1.3 MPa; Fig. 2). Drought significantly reduced Ψs in every organ (Fig. 2). After the 4-day drought treatment, Ψs had dropped to −3.3 MPa in the fifth leaf and awn, −3.86 MPa in the lemma, and −4.2 MPa in the palea. All three spike organs showed evidence of osmotic adjustment (range = 0.30-0.36 MPa; Table 1), which is an indicator of the ability to maintain cellular water during drought. There was no evidence of osmotic adjustment in the fifth leaf (Table 1), and on the fourth day of drought it showed severe wilting.
Gas exchange in the fifth leaf and awn during drought
Photosynthetic rate (A) and stomatal conductance (gs) differed significantly between treatments, organs, and time (Additional file 1: Table S1). We detected significant treatment × organ, time × organ, and time × treatment terms for gs, and significant time × organ and time × treatment terms for A (Additional file 1: Table S1). We also detected a significant time × treatment × organ interaction for gs but not A (Additional file 1: Table S1). In both the awn and fifth leaf, A and gs remained stable in control plants throughout the treatment period (Fig. 3). In stressed fifth leaves, A and gs declined significantly on the second day of drought treatment and remained low thereafter (Fig. 3a, c). In stressed awns, by contrast, A and gs did not decline significantly until the third day of the drought treatment (Fig. 3b, d).
Metabolic changes in the fifth leaf and spike organs during drought
We identified 18 metabolites, but only ten metabolites accumulated significantly during drought in one or more organs (Fig. 4, Additional file 1: Table S2): six amino acids (Fig. 4a-f), three sugars (Fig. 4g-i), and one organic acid (Fig. 4j). Although there was no evidence of osmotic adjustment in the fifth leaf, it accumulated six metabolites during drought. The awn, lemma, and palea accumulated seven, six, and two metabolites during drought, respectively (Fig. 4). Metabolites representing five different families of amino acids accumulated in the photosynthetic organs during drought: serine (glycine), branched-chain (valine and isoleucine), aspartate (threonine), glutamate (proline), and aromatic amino acids (phenylalanine; Fig. 4). Proline was the only amino acid that accumulated in all organs during drought (Fig. 4f). Valine accumulated in the fifth leaf, awn, and lemma during drought (Fig. 4b). Glycine, isoleucine, and threonine accumulated in the fifth leaf and awn during drought (Fig. 4a, c, d). Phenylalanine accumulated in the fifth leaf and lemma during drought (Fig. 4e). Sugars only accumulated in the spike organs during drought. Fructose accumulated in the awn (Fig. 4g), glucose accumulated in all three spike organs (Fig. 4h), and sucrose accumulated in the lemma (Fig. 4i) during drought. For the organic acids, malic acid accumulated in the lemma during drought (Fig. 4j).
Discussion
The spike (ear) of cereals consists of photosynthetic organs that are important sources of assimilate for grain filling, but their response to drought stress is poorly understood. The few previous studies that examined drought response in cereal spikes either focused solely on the awn or on the entire spike as a collective unit [8-10, 50, 53, 54, 57, 58, 68]. Our goal in this study was to characterize the drought response of individual spike organs (awn, lemma, and palea) in barley during the early stage of grain filling and to compare those responses with that of a vegetative organ (i.e., the fifth leaf). We found that these four organs displayed contrasting responses to drought, as indicated by differences in: (1) relative water content (RWC); (2) osmotic potential (Ψs); (3) extent of osmotic adjustment (OA); (4) rates of gas exchange in the awn and fifth leaf; and (5) accumulation of metabolites. Our results suggest that the spike organs are more drought resistant than the fifth leaf, and, among the spike organs, the lemma and palea are more drought resistant than the awn.
The water status of the fifth leaf and spike organs during drought
The combination of RWC and Ψs indicates whether plants maintain good hydration during drought through OA. RWC decreased progressively over the four-day drought period in all four organs, but the rate of decline in the awn, lemma, and palea was more moderate than that of the fifth leaf (Fig. 1). Similarly, drought reduced Ψs in all four organs, but the difference in Ψs between control and drought treatments was smallest in the fifth leaf. Further, Ψs was significantly higher (less negative) in the stressed fifth leaf than in the stressed palea (Fig. 2).
(Fig. 2 caption: Changes in osmotic potential in the fifth leaf and spike organs of barley during drought. Osmotic potential was measured on the fourth day of drought treatment during grain filling. Significant differences between organs are indicated with lower case letters (stressed plants) and upper-case letters (control plants). Within a given organ, significant differences between treatments (control vs. drought) are indicated with asterisks, where * = P < 0.05, ** = P < 0.01, and *** = P < 0.001. Data are presented as the mean of six replicates ± SE. We used SigmaPlot 10.0 to make the figure.)
Consistent with these differences in RWC and Ψs, we found that the lemma, palea, and awn adjusted osmotically to drought and the fifth leaf did not (Table 1, Additional file 1: Table S2). The lack of OA in the fifth leaf suggests that osmolyte accumulation in this organ (Fig. 4) may be due to passive water loss from the cytoplasm during drought. Alternatively, this result may suggest that the 4-day drought treatment caused cellular injury in the fifth leaf. Indeed, osmolyte accumulation is a common symptom of drought-induced cellular damage [69]. Among the spike organs, the awn, lemma, and palea had similar losses in RWC (Fig. 1) and displayed comparable OA (Table 1). The awn had higher (less negative) Ψs than the lemma and palea during drought; however, this difference was only significant between the awn and palea. Therefore, our results suggest that the spike organs maintain more cellular hydration than the fifth leaf during drought and, to a lesser extent, the lemma and palea maintain more water than the awn.
The fifth leaf and awn exhibit different gas exchange responses during drought
In addition to their differences in RWC, Ψs, and OA, the awn and fifth leaf had different rates of gas exchange during the drought treatment. The major difference was the time it took for photosynthesis (A) and stomatal conductance (gs) to decline following the stress. In the fifth leaf, A and gs sharply decreased on the second day of drought, whereas in the awn, these processes did not show a significant decline until the third day of stress (Fig. 3). This suggests that, compared to the awn, the leaf contributes very little assimilate for grain filling during drought. The rapid shut-down of gas exchange in the fifth leaf could be related to its lack of OA (Table 1), which would limit its ability to maintain turgor pressure in the guard cells [70]. Alternatively, drought may have inhibited gas exchange in the fifth leaf at the biochemical level [71]. However, it must be pointed out that gas exchange was not sustained in the awns indefinitely, as both organs had comparably low rates of A and gs on day four of the drought treatment (Fig. 3). The decline in gas exchange in the awn was not because of a lack of OA (Table 1) but rather was most likely caused by drought-induced inhibition of the photosynthetic metabolism [71]. This interpretation is supported by our previous transcriptome study, which showed down-regulation of photosynthetic genes in the awn of Morex barley on the fourth day of drought [59]. It is worth noting that the high number of awns in the barley spike increases the surface area for photosynthesis [50,72], and the total assimilate contributed by the awns could still be higher than that of the fifth leaf even on the third or fourth day of drought stress. We did not measure gas exchange in the lemma or palea because of the challenges associated with accurately measuring this process on these organs. However, our RWC, Ψs, and OA data suggest that these organs are more drought resistant than the awn. Further, we previously showed that the lemma and palea express fewer genes than the awn during drought [59].
(Figure caption: Within an organ, significant differences between treatments (control vs. drought) are indicated with asterisks, where * = P < 0.05, ** = P < 0.01, and *** = P < 0.001. Data are presented as the mean of six replicates ± SE. We used SigmaPlot 10.0 (Systat Software Inc., San Jose, CA) to make the figures.)
Taken together, this evidence suggests that the lemma and palea might maintain higher rates of gas exchange during drought than the fifth leaf or even the awn. Proper measurement of gas exchange in the lemma and palea is needed to test this hypothesis.
The fifth leaf and spike organs accumulate different metabolites during drought
Suppression of photosynthesis by abiotic stress leads to accumulation of reactive oxygen species (ROS) [73][74][75][76]. ROS can destroy nucleic acids, proteins, carbohydrates, and lipids [77]. Drought-induced stomatal closure restricts uptake of CO2 and the use of NADPH and ATP in the Calvin cycle, favoring the production of singlet oxygen, superoxide, and H2O2 in the photosynthetic electron transport chain. Disruption of photosynthesis also increases production of H2O2 during photorespiration in the peroxisome and in the mitochondrial electron transport chain [74,78,79]. In addition to their role as osmolytes for turgor maintenance, accumulated metabolites can detoxify ROS and stabilize subcellular structures in drought-stressed tissues.
We detected significant accumulation of ten metabolites in the photosynthetic organs of barley following the 4-day drought treatment (Fig. 4, Additional file 1: Table S2). Metabolite accumulation in the barley cultivar we used (Giza 132) is consistent with other studies [80][81][82][83][84][85][86][87]. Previous studies have shown that the types of osmolytes that accumulate during drought are generally species-specific [81,84,86,88]. Our results expand on this conclusion by showing that osmolyte accumulation during drought is organ-specific in barley (Fig. 4). Accumulation of amino acids during drought is due to active synthesis, inhibition of their degradation, and/or breakdown of proteins [89][90][91]. Proline was the only metabolite that accumulated in all four photosynthetic organs during drought (Fig. 4), suggesting that this amino acid plays an important role in the overall drought response of barley. This result is consistent with other studies that detected accumulation of proline in response to drought [83,92,93]. Proline serves as an energy source, a stress-related signal [93,94], and an osmolyte for turgor maintenance and protection of cellular functions through ROS scavenging and stabilization of subcellular structures [95]. In the cytosol and chloroplasts, proline is synthesized from glutamate by pyrroline-5-carboxylate synthetase (P5CS) and pyrroline-5-carboxylate reductase (P5CR). In the mitochondria, proline is synthesized from arginine, catalyzed by arginase and ornithine aminotransferase (OAT) [69]. Proline is degraded to glutamate in the mitochondria by proline dehydrogenase (PDH) and pyrroline-5-carboxylate dehydrogenase (P5CDH) [69]. P5CS is up-regulated during drought [96] and PDH is down-regulated [97,98], promoting proline accumulation. P5CR, arginase, and OAT are up-regulated in the awn, lemma, and palea of barley during drought [59], and these enzymes may be the major players in proline accumulation in the spike. Five other amino acids (glycine, valine, isoleucine, threonine, and phenylalanine) accumulated in the fifth leaf and variably in the spike organs during drought (Fig. 4). The last step in the biosynthesis of the branched-chain amino acids valine and isoleucine is catalyzed by the enzyme branched-chain aminotransferase (BCAT). This enzyme is also involved in the initial steps of isoleucine catabolism. BCAT maintains the concentration of the branched-chain amino acids below toxic levels by controlling their synthesis and degradation [99]. The BCAT gene is inducible by drought [59,99] and ABA [100]. Threonine (aspartate family) is the substrate for isoleucine biosynthesis. The increased threonine concentration in the fifth leaf and awn (Fig. 4) might also have contributed to the accumulation of isoleucine during drought. The aromatic amino acid phenylalanine accumulated in the fifth leaf and lemma (Fig. 4). Aromatic amino acids are synthesized via the shikimate pathway and serve as precursors for several secondary metabolites. The accumulation of phenylalanine in the lemma is inconsistent with our previous transcriptome analysis, which showed no change in expression of aromatic amino acid biosynthesis genes in the lemma and downregulation in the awn during drought [59]. Nevertheless, phenylalanine accumulation in the fifth leaf is consistent with reports in other species, such as maize, during drought [101]. Sugars are important sources of carbon and energy [102]. They also serve as signal molecules [2,[103][104][105] and osmolytes [102,106,107].
We detected accumulation of glucose in all three organs of the spike, suggesting it plays an important role in the overall drought response of the spike. Fructose accumulated only in the awn, and sucrose accumulated only in the lemma during drought (Fig. 4). Accumulation of different sugars may suggest that each spike organ uses different osmolytes for drought resistance. Nevertheless, accumulation of sugars in the barley variety we used is consistent with accumulation in other species during drought [85,108,109]. The organic acid malate is an intermediate in the citric acid (tricarboxylic acid) cycle, the glyoxylate cycle, and photosynthesis (C4 and Crassulacean acid metabolism, CAM). Malate plays a central role in plant metabolism and homeostasis, including providing a carbon skeleton for amino acid biosynthesis, acting as an osmolyte, regulating pH homeostasis, serving as a root exudate during phosphorus deficiency, and acting as a reducing equivalent shuttled between subcellular compartments [110][111][112][113]. In our study, malate accumulated only in the stressed lemma (Fig. 4), and this agrees with accumulation in maize [114] and wheat [85] during drought. Accumulation of malate is consistent with up-regulation of MDH (malate dehydrogenase) in the lemma during drought [59,115]. MDH catalyzes the interconversion of malate and oxaloacetate, and accumulation of malate may suggest that MDH predominantly catalyzes the conversion of oxaloacetate to malate in the lemma during drought.
Conclusions
In this study, we showed that the spike organs (lemma, palea, and awn) and a vegetative organ (the fifth leaf) of barley respond differently to drought at the grain filling stage. Based on differences in RWC, Ψs, extent of OA, gas exchange, and metabolite accumulation, we conclude that the spike organs of barley maintain more cellular hydration than the fifth leaf, and, to a lesser extent, the lemma and palea retain more water than the awn during drought. We propose that the spike organs employ two strategies for coping with drought: drought avoidance via osmotic adjustment and drought tolerance through ROS scavenging and stabilization of macromolecules.
Additional file
Additional file 1: Table S1. ANOVA results for the effects of treatment, organ, time, and their interactions on RWC and gas exchange. Table S2. ANOVA results for the effect of treatment, organ, and their interaction on ten metabolites. (DOCX 18 kb)
Abbreviations
A: photosynthesis; gs: stomatal conductance; MPa: megapascal; OA: osmotic adjustment; RWC: relative water content; Ψs: osmotic potential
Traceable Ciphertext-Policy Attribute-Based Encryption with Verifiable Outsourced Decryption in eHealth Cloud
1 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 Jiangsu Innovative Coordination Center of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
4 School of Computer Science and Technology, Anhui University, Hefei 230601, China
5 School of Computer Science and Technology, Xidian University, Xian 710071, China
Introduction
The electronic health care (eHealth) system is regarded as an outstanding approach to providing good health care services through various emerging technologies, including the Internet of Things, cloud computing, mobile computing, and wireless sensor networks. In cloud-assisted eHealth systems, an individual patient integrates his/her personal health information (PHI) collected via various wearable and embedded sensors, stores the PHI in the cloud, and receives real-time and high-quality medical treatment. Unfortunately, while the patient enjoys the convenient storage services provided by the cloud server, the risk of privacy exposure also rises. The sensitive PHI may be exposed to the cloud server, which cannot be fully trusted. Even worse, the PHI may be widely propagated to unauthorized parties for commercial benefit or other purposes. Thus, the PHI must be encrypted before being hosted in the eHealth cloud. Meanwhile, an access policy must be specified to point out who is authorized to access the PHI.
Aiming to realize access control on encrypted messages, attribute-based encryption (ABE) [1] was presented to provide an efficient solution to this kind of application. According to the place where the access policy is embedded, ABE schemes are divided into two forms: key-policy ABE (KP-ABE) [2] and ciphertext-policy ABE (CP-ABE) [3]. In the former framework, every user's key is labeled with an access policy while the ciphertexts are annotated with chosen sets of attributes. On the contrary, the user's key in CP-ABE is issued according to his/her attributes while the ciphertext is encrypted under an access policy. Since ABE is a feasible mechanism for preserving the security and privacy of patients' PHI, a series of attribute-based access control systems [4][5][6][7][8] have been proposed, aiming at expressive policies, security, or efficiency. In particular, there remain two significant features to be considered in utilizing the ABE technique in eHealth systems.
The first feature is verifiability of outsourced decryption. In most ABE systems [1][2][3][9][10][11][12], the decryption overhead is linear in the number of involved attributes and expensive for energy-constrained terminals. The decryption outsourcing technique [13] was proposed to reduce the number of exponential operations and bilinear pairing operations on the user side by offloading the heavy decryption computation to a third-party server, e.g., the cloud server. The user then recovers the plaintext by executing only one exponential operation over the ElGamal-style partially decrypted ciphertext element generated by the third-party server. However, such an outsourced scheme cannot guarantee the correctness of the returned ElGamal-style element. Lai et al.
[14] presented a verifiable approach in ABE to check whether the third-party server has honestly executed the decryption service. However, their approach brings redundant overhead in both encryption computation and ciphertext size. Qin et al. [15] provided an efficient verifiable ABE scheme which significantly reduces the computation cost in encryption and the decryption overhead for users.
Another considerable feature is traceability. Take CP-ABE as an instance: the private key is generated from some descriptive attributes rather than from a unique identity, and each attribute may be possessed by multiple users, so it can be impossible to distinguish who is the original owner of a given private key. Imagine two physicians in an eHealth system, Tomas and Jack. They have the attribute set '{orthopedics department, chief physician}', which is not possessed by any other users. By the key delegation technique [3], both Tomas and Jack can regenerate a private key corresponding to the set '{orthopedics department, chief physician}'. If there is a third user who can decrypt the ciphertext labeled by the access policy '{ 'orthopedics department' AND 'chief physician' }', where did the key come from? Tomas or Jack? To solve the problem above, Liu et al. [16] extended an adaptively secure CP-ABE scheme [9] to support 'white-box' traceability, where the malicious user directly leaks his/her private key. Subsequently, Ning et al. [17] constructed a large attribute universe and traceable CP-ABE scheme. In contrast to the 'small universe' in [3, 10, 14-16], 'large universe' means that the scale of the attribute universe is unbounded [18].
However, existing works mostly support the property of verifiable outsourced decryption or traceability separately. There is no practical CP-ABE scheme with both verifiable outsourced decryption and white-box traceability: (1) the CP-ABE schemes [16,17] support traceability well, but the user's decryption cost grows with the number of attributes; (2) the CP-ABE schemes [14,15,19,20] provide decryption assistance for users, and the correctness of the returned PDC element is guaranteed; however, the traceability property is not addressed.
In this work, we propose a novel verifiable and traceable CP-ABE scheme named VTCP-ABE for eHealth cloud applications. The VTCP-ABE scheme is the first scheme which simultaneously achieves white-box traceability and verifiable outsourced decryption without exposing the physician's identity information. Since we take the 'large universe' scheme [18] as the basis, the attribute universe in our scheme is inherently unbounded. We further extend the VTCP-ABE to support an additional delegation property. We also provide formal proofs of selective CPA security, verifiability, and traceability for VTCP-ABE. The comparison and simulation results show that our VTCP-ABE is applicable for practical eHealth cloud applications. In particular, we make the following contributions:
(1) We propose a new VTCP-ABE scheme which simultaneously achieves the properties of verifiable outsourced decryption, white-box traceability, and large universe. An authorized physician can check the correctness of the partially decrypted ciphertext (PDC) which is requested from the eHealth CDS. Given a private key, the original owner can be precisely tracked. The attribute universe can be exponentially large, and the number of public parameter elements is constant no matter how many attributes are chosen.
(2) We present an efficient approach to prevent the CDS from learning the fixed identification information of the physician while offering the decryption service. The original ciphertext and the transmission private key are preprocessed before being sent to the CDS. This method is acceptable since only two additional exponential operations are added for each decryption request.
(3) We exploit an additional delegation property for our VTCP-ABE, with which a resource-constrained physician can delegate someone to obtain a PDC element without compromising the privacy of the PHI.
Green et al. [13] constructed the first decryption-outsourcing ABE, where most of the decryption overhead is hosted by a third party. With the returned partially decrypted ciphertext, a user can recover the plaintext message by executing only one exponential operation. Based on the outsourced method [13], Li et al. [7] presented a PHR data sharing scheme for cloud storage applications in the multi-authority setting. In both [7,13], the correctness of the returned PDC is not guaranteed. Lai et al. [14] presented an approach to check whether the partially decrypted (transformed) ciphertext element is correctly calculated; their technique incurred noticeable overhead in both decryption and encryption. Based on the key encapsulation mechanism, Lin et al. [19] and Qin et al. [15] separately proposed fascinating methods to support verifiable outsourced decryption in ABE. The difference between [19] and [15] is that, in [19], the hash value of a random group element is set as the symmetric key to encrypt the original data; the random element is then encrypted by an ABE scheme to obtain an ABE-type ciphertext, which is used to generate the verification key. In [15], the original data is encrypted along with a randomly chosen bit string, while the verification key is set by executing exponential operations in the group, taking the hash values of the message and the bit string as exponents.
Liu et al. presented the first adaptively secure and white-box traceable CP-ABE scheme in [16], where any monotonic LSSS access structure is supported. They further constructed another CP-ABE scheme with black-box traceability in [30]. Based on the scheme [31], Ning et al. [17] exploited white-box traceability for CP-ABE in the large universe setting. Since then, many traceable ABE constructions have been proposed [6,32,33]. However, in these traceable schemes [6,16,17,30,32,33], the decryption overhead grows with the scale of the attribute set adopted in decryption.
Table 1 compares the characteristics of some related works and our VTCP-ABE. From Table 1, our VTCP-ABE scheme is the only practical scheme to simultaneously support the properties of large universe, verifiable outsourced decryption, white-box traceability, and delegation in CP-ABE.
Linear Secret Sharing Schemes (LSSS)
Definition 1 (Linear Secret Sharing Schemes [21,34]). Let P denote a set of attributes and let p be a chosen prime. Let A ∈ Z_p^(ℓ×n) be a matrix. For all i = 1, ..., ℓ, a function ρ labels the i-th row of A with an attribute (i.e., ρ: [ℓ] → P). A secret sharing scheme Π over the attribute universe P is linear if one has the following:
(1) The shares for each attribute form a vector over Z_p.
(2) In order to generate the shares of a secret s ∈ Z_p, we select the column vector v = (s, y_2, ..., y_n)^⊤, where y_2, ..., y_n are randomly selected from Z_p; then Av is the vector of ℓ shares of s according to Π. The share λ_i = (Av)_i belongs to the attribute ρ(i).
As demonstrated in [34], the linear reconstruction property of LSSS is defined as follows: suppose (A, ρ) is the access structure T and S is an authorized set. Let I = {i : ρ(i) ∈ S} be the index set of the rows linked with the attributes in S. Then there exist constants {ω_i ∈ Z_p}_{i∈I} such that, if {λ_i = (Av)_i} are valid shares of a secret s, then ∑_{i∈I} ω_i λ_i = s.
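Definition 1 and the reconstruction property can be made concrete with a toy example. The sketch below shares a secret under a two-attribute AND policy and reconstructs it with the constants ω_i; the prime, matrix, and labels are illustrative only:

```python
import random

# Toy LSSS over Z_p for the AND policy 'attr1 AND attr2', illustrating
# Definition 1 and the reconstruction property. All values are examples.

p = 101  # a small prime, for illustration only

A = [[1, 1],    # row 1, labeled rho(1) = attr1
     [0, -1]]   # row 2, labeled rho(2) = attr2

def share(secret: int) -> list[int]:
    """lambda = A . v with v = (secret, y2)^T, y2 random in Z_p."""
    v = [secret % p, random.randrange(p)]
    return [sum(a * x for a, x in zip(row, v)) % p for row in A]

lam = share(42)

# The authorized set {attr1, attr2} selects both rows; the constants
# omega = (1, 1) satisfy sum_i omega_i * lambda_i = secret (mod p),
# because row1 + row2 = (1, 0).
print((lam[0] + lam[1]) % p == 42)  # True
```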
q-Type Assumption. The security of VTCP-ABE is reduced to a q-type assumption [18]. Suppose G is a cyclic group whose order is a prime p. Randomly pick g ∈ G and choose s, a, b_1, b_2, ..., b_q ∈ Z_p. The adversary A is given the group description (p, G, G_1, e) together with a set Ξ containing g, g^s, and group elements whose exponents are products and quotients of powers of a and the b_j (see [18] for the full list of terms). It must be hard for A to distinguish the element e(g, g)^(s·a^(q+1)) ∈ G_1 from a random element of G_1. The advantage of an algorithm A in solving the above q-type problem is the difference between the probabilities that A outputs 1 on the real element and on a random one.
Definition 2. We claim that the q-type assumption holds if the advantage of all polynomial time adversaries is negligible in the above q-type game.
q-Strong Diffie-Hellman Assumption (q-SDH). The q-SDH problem [35,36]: suppose G is a cyclic group whose order is a prime p, and g is randomly selected from G. Given a (q+1)-tuple (g, g^x, g^(x^2), ..., g^(x^q)), output a pair (c, g^(1/(x+c))) ∈ Z_p × G. An algorithm A has advantage ε in solving the q-SDH problem if Pr[A(g, g^x, g^(x^2), ..., g^(x^q)) = (c, g^(1/(x+c)))] ≥ ε, where the probability is over the random bits consumed by A and the randomness of x ∈ Z_p.
Definition 3. We claim that the (q, t, ε)-SDH assumption holds if the advantage of every t-time adversary in solving the above q-SDH problem is at most ε.
System Architecture and Security Model
The authority: the authority produces the system parameters and generates private keys for the legal physicians depending on their attributes. It is also in charge of tracing malicious physicians.
The patient: with the help of IoT techniques, the patient integrates and then encrypts his/her PHI under an appropriate access policy and further uploads the ciphertext to the eHealth cloud storage server.
The eHealth cloud storage server (CSS): the eHealth CSS provides storage service for the patient. If necessary, the patient can call the CSS to delete his/her PHI data.
The eHealth cloud decryption server (CDS): the eHealth CDS provides a pre-decryption service for the encrypted PHI and returns the partially decrypted ciphertext to the authorized physician.
The physician: the physician takes responsibility for the medical treatment of any patient whose access policy accepts his/her attributes. The physician is also able to check the correctness of the pre-decryption results returned by the CDS. A malicious physician may leak his private key for economic benefit or some malignant purpose.
We note that the eHealth CSS and CDS are assumed to be semi-trusted, as in [22]. That is, the CSS and CDS honestly execute the pre-set algorithms, but they attempt to learn as much useful information about the encrypted PHI as possible. In addition, the eHealth CDS may want to obtain the identification information of the physician.
As one of the important applications in IoT environments, the eHealth cloud system enables the patient to collect his PHI via wearable devices, physiologic sensor nodes, body area networks, etc. Before uploading the PHI to the cloud server to get real-time health care services, the patient can define an expressive access policy over descriptive attributes for his PHI by VTCP-ABE. According to their assigned attributes, individual physicians have differentiated, flexible access rights. They can provide various (free or paid) health care services via smart devices on condition that their attributes match the access policy of the patient's PHI. Our VTCP-ABE also offers traceability to prevent the key abuse problem and the verifiable outsourced decryption technique to offload most of the decryption cost to the cloud server while guaranteeing the returned results.
Definition of VTCP-ABE. Our VTCP-ABE comprises the following seven algorithms.
Setup(λ, U) → (pp, msk): takes in a security parameter λ and the system attribute universe U; outputs the system public parameters pp and the master secret key msk. Besides, it initializes an identity table Tab = ∅.
Encrypt(pp, m, T) → (ct, vk): takes in pp, a message m, and an access structure T; outputs a ciphertext ct and a verification key vk.
KeyGen(pp, msk, id, S) → (tk, dk): takes in pp, msk, an identity id, and an attribute set S; outputs a transmission private key tk and a user decryption key dk.
Pre-Process(pp, ct, tk) → (ct′, tk′): takes in pp, ct, and tk; outputs a pre-processed ciphertext ct′ and a pre-processed private key tk′.
Pre-Decrypt(pp, ct′, tk′) → pdc: run by the eHealth CDS; outputs a partially decrypted ciphertext (PDC) pdc.
Decrypt(pp, pdc, dk, vk) → m or ⊥: run by the physician; verifies the PDC against vk and outputs the message m, or ⊥ on failure.
Trace(pp, Tab, tk, dk) → id or ⊤: takes in pp, the identity table Tab, tk, and dk. It first verifies whether tk and dk are well-formed. If so, it outputs the identity id associated with tk and dk; otherwise, it outputs ⊤, implying that tk and dk are not required to be traced. If tk and dk can pass a "key sanity check", which means that they can be used in the normal decryption phase, they are called well-formed [16].
CPA Security Model. Similar to [17,18], the selective security model of VTCP-ABE against chosen plaintext attack (CPA) is defined as follows. Init: the adversary A gives the simulator B the challenge access policy T*. The remaining phases (setup, key-query phases, challenge, and guess) follow the standard selective CPA game.
Definition 4. We claim that a VTCP-ABE scheme is selectively CPA secure if the advantage of every PPT adversary in the above selective security game is negligible.
Security Game for Verifiability. Based on the replayable chosen ciphertext attack (RCCA) security model [13,15], we briefly introduce the verifiability game as follows. Setup: the challenger B generates (pp, msk) and sends pp to the attacker A. Phase 1: A queries the relevant key generation and decryption oracles as in [15]. The challenge phase and the attacker's winning condition also follow [15].
The Proposed VTCP-ABE
In this section, we first briefly introduce the techniques for constructing a verifiable and traceable CP-ABE scheme and then give the details of the VTCP-ABE construction.
Technical Overview. To achieve traceability as in [17], each private key is associated with a unique fixed number so that the key owner cannot re-randomize his own private key to get a completely new key. In the verifiable CP-ABE scheme with outsourced decryption [15], the private key is composed of a transmission key and a user decryption key. The transmission key is sent to a third party to get the partial decryption result, and the user decryption key is used to decrypt the partial decryption result and check its correctness. Our goal is to achieve efficient user decryption and traceability without compromising security and privacy. However, if we combine the traceable CP-ABE [17] and the verifiable outsourced decryption approach [15] in a naive way, the fixed identifier number will be exposed to the eHealth CDS. Even worse, the CDS may use the identifier and the transmission private key to fabricate a key which could pass the check in the tracing algorithm of [17]. That is, a legal physician may be framed as malicious and further revoked from the system. To prevent the CDS from knowing the identifier, we process the transmission private key and the original ciphertext before submitting them to the eHealth CDS. Meanwhile, we add the user decryption key as an input of the tracing algorithm. Finally, we add the property of verifiable outsourced decryption into the traceable CP-ABE scheme [17] at a very low cost on the physician side (one additional element in the private key, two additional exponential operations in pre-processing).
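Before the detailed construction, the following hypothetical Python interface summarizes the seven algorithms defined above and who produces and consumes each object (pp, msk, ct, vk, tk, dk, PDC). All names and types are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical interface for the seven VTCP-ABE algorithms, making the
# data flow between patient, CSS, CDS, and physician explicit.

from typing import Optional, Protocol, Tuple

class VTCPABE(Protocol):
    def setup(self, lam: int, universe: set) -> Tuple[bytes, bytes]:
        """-> (public parameters pp, master secret key msk); inits table Tab."""
    def encrypt(self, pp: bytes, m: bytes, policy: str) -> Tuple[bytes, bytes]:
        """Patient side -> (ciphertext ct, verification key vk)."""
    def keygen(self, pp: bytes, msk: bytes, ident: str, attrs: set) -> Tuple[bytes, bytes]:
        """-> (transmission key tk for outsourcing, user decryption key dk)."""
    def pre_process(self, pp: bytes, ct: bytes, tk: bytes) -> Tuple[bytes, bytes]:
        """Physician-side blinding so the CDS never sees the fixed identifier."""
    def pre_decrypt(self, pp: bytes, ct_blind: bytes, tk_blind: bytes) -> bytes:
        """CDS side -> partially decrypted ciphertext (PDC)."""
    def decrypt(self, pp: bytes, pdc: bytes, dk: bytes, vk: bytes) -> Optional[bytes]:
        """Verifies the PDC against vk; -> plaintext, or None on failure."""
    def trace(self, pp: bytes, tab: dict, tk: bytes, dk: bytes) -> Optional[str]:
        """-> identity of a well-formed leaked key, or None (the ⊤ case)."""
```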
Detailed Construction. We now give the detailed construction of the VTCP-ABE.
Setup. Given a group description D = (p, G, G_1, e), where the prime p is the order of the groups (G, G_1) and e denotes a bilinear map e: G × G → G_1, the system attribute universe is set as U = Z_p. Then randomly pick g, u, h, w, v ∈ G and α, a ∈ Z_p, following [17,18].
Encrypt. The encryptor chooses a random group element R, sets k_1 = H_1(R), and computes a symmetric key k = H_3(R). It then calls the symmetric encryption algorithm to create a ciphertext ct_SE = SE-Encrypt(k, m) and derives the verification key vk from R via H_2. Finally, the ciphertext of the PHI data ct = (ct_ABE, ct_SE) is uploaded to the eHealth CSS together with vk.
Pre-Process. The physician can request the PHI ciphertext ct = (ct_ABE, ct_SE) and vk from the eHealth CSS, which will respond with the elements the physician needs locally, while the other elements are sent to the eHealth CDS.
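The verify-then-decrypt control flow of the construction can be illustrated independently of the pairing algebra. The sketch below uses SHA-256 in place of H_1/H_2/H_3 and a one-time-pad XOR in place of SE-Encrypt; it shows only the encapsulation/verification pattern, not the actual scheme:

```python
import hashlib
import os

# Verify-then-decrypt encapsulation pattern. SHA-256 stands in for the
# hash functions and XOR stands in for SE-Encrypt; in the real scheme, R
# would be carried inside the ABE ciphertext and recovered from the PDC.

def H(tag: bytes, data: bytes) -> bytes:
    return hashlib.sha256(tag + data).digest()

def encrypt(message: bytes):
    R = os.urandom(32)                  # random element chosen by the encryptor
    key = H(b"H3", R)                   # symmetric key k = H3(R)
    ct_se = bytes(m ^ k for m, k in zip(message, key))
    vk = H(b"H2", R)                    # verification key derived from R
    return R, ct_se, vk

def decrypt(R_recovered: bytes, ct_se: bytes, vk: bytes):
    if H(b"H2", R_recovered) != vk:     # a dishonest CDS is detected here
        return None
    key = H(b"H3", R_recovered)
    return bytes(c ^ k for c, k in zip(ct_se, key))

R, ct, vk = encrypt(b"PHI: blood pressure 118/76")
assert decrypt(R, ct, vk) == b"PHI: blood pressure 118/76"
assert decrypt(os.urandom(32), ct, vk) is None  # tampered PDC is rejected
```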
B runs Setup to generate PP and MSK, except that the hash functions are set to the challenge hash functions H1* and H2*. A outputs an attribute set S* which satisfies T* and a partially decrypted ciphertext CT_pd*.

If A wins the verifiability game, B will get a message m′ ∉ {m*, ⊥}. Note that the Decrypt algorithm outputs the message with k1 = H1*(R), where R is recovered from the challenge ciphertext and the decryption key. We now analyze the success probability of A by considering the following cases. (1) Thus, B gets a collision of H1*.

Traceability

Theorem 12. If the q-SDH assumption holds, then our proposed VTCP-ABE scheme is fully traceable, on condition that the number of key queries made by the attacker A is smaller than q.

Setup. Key Query. B answers the i-th query (id_i, S_i) as follows. If the event ΨA does not happen, B randomly picks a pair in Zp × G as the solution. As analyzed in [17], B's advantage in solving the q-SDH problem is non-negligible.

Performance Comparison

We here compare the performance of the VTCP-ABE scheme with the TCP-ABE scheme [17] and the VCP-ABE scheme [15] in the setting of key encapsulation, where the PHI data is encrypted by a symmetric encryption key which is in turn encrypted under an access policy in ABE.

6.1. Numeric Result. Tables 2 and 3 show the numeric comparison between our scheme and the other two schemes [15, 17]. Let P, E, and E1 be the overhead of executing a bilinear pairing, an exponential operation in G, and an exponential operation in G1, respectively. U denotes the system attribute universe, and the attribute sets used in encryption, key generation, and decryption are denoted accordingly. Let ℓ2 be the output length of H2.

In Table 2, we calculate the computation cost incurred in the following phases: encryption, key generation, pre-decryption, and user decryption. The user in VCP-ABE and in our VTCP-ABE expends a constant-size computation cost of exponential operations in G1. Note that our VTCP-ABE requires two additional exponential operations on the user side, since the ciphertext and the transmission key need to be processed before being transmitted to the eHealth CDS.

In Table 3, the lengths of the system public parameters, private key, and ciphertext are calculated by the number of group elements. The VCP-ABE scheme requires more public parameters, linear in the scale of the system attribute universe, due to the fact that all the possible attributes need to be listed during the system initialization phase. Compared with the non-outsourced TCP-ABE scheme, our VTCP-ABE requires an additional element as the user decryption key and an output of H2 as the verification key.

6.2. Implementation. We implement the VCP-ABE scheme [15], the TCP-ABE scheme [17], and the proposed VTCP-ABE on a Windows 7 platform with an Intel(R) Core(TM) i5-3450 CPU at 3.10 GHz and 8.00 GB of memory. A Type A elliptic curve group is chosen from the JPBC library [38], and the group order is a 512-bit prime. We mainly count the computation cost incurred by ABE-relevant operations. The computation time of each algorithm is the average of 20 trials.

Figure 2 illustrates the computation cost comparison among the VCP-ABE scheme, the TCP-ABE scheme, and our proposed VTCP-ABE scheme. Figure 2(a) shows the computation time in the initialization phase. In the three schemes, the computation cost is mainly incurred by computing the public parameters.
Figures 2(b) and 2(c) show the computation time in the key generation phase and the encryption phase, respectively. It is observed that the key generation cost and the encryption overhead in the three schemes are linear in the number of used attributes. More precisely, TCP-ABE and ours require more computation than VCP-ABE, since a combination of parameters including ℏ is employed to indicate an attribute.

Figure 2(d) shows the computation cost in the pre-process phase of our VTCP-ABE. Two exponential and two multiplicative operations in the group G are required, no matter how many attributes are involved.

Figure 2(e) illustrates the computation cost comparison in the user decryption phase among the three schemes. We can find that the user decryption cost in the TCP-ABE scheme increases with the number of attributes. Thanks to the efficient outsourced decryption approach, the final decryption costs on the user side in the VCP-ABE scheme and ours are significantly lower than that in TCP-ABE and are independent of the attribute number.

Figure 2(f) gives the computation cost comparison in tracing the malicious users between TCP-ABE and ours. We can observe that the computation cost in both schemes grows with the number of attributes and that our scheme only requires one additional exponential operation in the group G1.

Delegate Extension

If a physician has trouble connecting to the eHealth CSS and CDS, he/she can delegate someone to download the PHI ciphertext from the CSS and request the partially decrypted ciphertext from the CDS. However, the access privilege of the delegated user has to be restricted. Inspired by [20, 39, 40], we employ a verifiable random function to limit the access of delegated users to a maximum number of times and propose a verifiable and traceable CP-ABE scheme with key delegation (VTDCP-ABE).

The Encrypt, KeyGen, Pre-Process, Pre-Decrypt, and Trace algorithms are the same as those in VTCP-ABE. The k-times delegated transmission key is set accordingly.

Delegate Pre-Decrypt. The eHealth CDS first initializes a counter ctr = 0 and a token set for each delegated user, and stores the tuple (ctr, token set) in a delegation list. Once it receives a Pre-Decrypt request from a delegated user, the CDS responds in the following way. If the above three conditions do not hold, it aborts. Otherwise, it updates ctr ← ctr + 1 and computes the partially decrypted ciphertext. Finally, the CDS responds to the delegated user with the partially decrypted ciphertext. Then the delegated user gives the partially decrypted ciphertext and the symmetric ciphertext to the physician.

Decrypt. If the physician interacts with the CSS and CDS directly, this algorithm acts exactly as the Decrypt algorithm of VTCP-ABE. If the physician asks a delegated user to obtain the ciphertext and request the outsourced decryption service, R is recovered from the partially decrypted ciphertext using the physician's secret values. The verification and PHI decryption operations are identical to those of VTCP-ABE.

Since the physician's secret values are kept private, the delegated user cannot obtain any content of the PHI ciphertext except a partially decrypted ciphertext.
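The counter-based budget that the CDS keeps for each delegated user can be sketched as follows. This is a schematic with hypothetical names (token_is_valid stands in for the verifiable-random-function check, partially_decrypt for the ABE pre-decryption), not the scheme's actual algorithms.

```python
def token_is_valid(user: str, token: str) -> bool:
    return token == f"token-{user}"          # placeholder for the VRF check

def partially_decrypt(ciphertext: str) -> str:
    return f"partial({ciphertext})"          # placeholder for ABE pre-decryption

class DelegationList:
    """CDS-side bookkeeping: each delegated user gets at most k requests."""
    def __init__(self):
        self.entries = {}                    # user -> [count, limit]

    def register(self, user: str, k: int):
        self.entries[user] = [0, k]          # counter starts at 0

    def pre_decrypt(self, user: str, token: str, ciphertext: str):
        entry = self.entries.get(user)
        if entry is None or not token_is_valid(user, token):
            return None                      # abort: unknown user or bad token
        if entry[0] >= entry[1]:
            return None                      # abort: delegation budget exhausted
        entry[0] += 1                        # ctr <- ctr + 1
        return partially_decrypt(ciphertext)

cds = DelegationList()
cds.register("alice", k=2)
assert cds.pre_decrypt("alice", "token-alice", "CT") is not None
assert cds.pre_decrypt("alice", "token-alice", "CT") is not None
assert cds.pre_decrypt("alice", "token-alice", "CT") is None  # third request refused
```

Because the counter lives on the CDS and the token check binds each request to a specific delegated user, exceeding the k-times budget is refused even if the delegated user replays old requests.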
Conclusion

In this paper, we have constructed a verifiable and traceable CP-ABE (VTCP-ABE) scheme for eHealth cloud applications, which also achieves the properties of a large universe and delegation. With VTCP-ABE, the patient can enforce fine-grained access control over his/her PHI in a cryptographic way. Before the encrypted PHI is submitted to the eHealth cloud decryption server, a pre-process on the ciphertext and the transmission key is employed to preserve the identity privacy of the physician. The correctness of the returned ciphertext can be efficiently verified. Moreover, the malicious physician who leaks the private key can be precisely traced. Besides, we extend the proposed VTCP-ABE to support the delegation property, with which a resource-limited physician can authorize someone else to obtain a partially decrypted ciphertext without exposing the PHI content. The security of VTCP-ABE is proved in the selective model. Extensive experiments illustrate that our VTCP-ABE scheme efficiently achieves verifiability, traceability, and a large attribute universe.

3.1. System Description. As shown in Figure 1, our VTCP-ABE framework in the eHealth cloud mainly consists of the following components.

Guess. A guesses b′ for b. A's advantage is defined as |Pr[b′ = b] − 1/2|.

A wins if the decryption result is not in {m, ⊥}. A's advantage in this game is defined as Adv-A. We claim that a VTCP-ABE scheme is verifiable if Adv-A is negligible for all PPT attackers in the above game.

3.5. Security Game for Traceability. The traceability game of our VTCP-ABE is defined as follows.

Setup. The challenger B generates (PP, MSK) and sends PP to the attacker A. It keeps MSK as a secret key.

Key Query. A submits the tuples (id1, S1), (id2, S2), ..., (idq, Sq) to B, where q refers to the number of queries that A can make.

Key Forgery. A outputs TK⋆ and DK⋆. A wins if Trace(PP, TK⋆, DK⋆) ≠ ⊤ and Trace(PP, TK⋆, DK⋆) ∉ {id1, id2, ..., idq}. A's advantage is defined as Pr[Trace(PP, TK⋆, DK⋆) ∉ {⊤} ∪ {id1, id2, ..., idq}].

Definition 6. We claim that a VTCP-ABE scheme is fully traceable if the advantage is negligible for all PPT attackers in the above game.

Challenge. The attacker A submits an access policy T* and a message m*. B encrypts m* under T* to obtain (CT*, VK*) and sends them to A.

Phase 2. A repeats the key queries as in Phase 1.

Output. A gives B a message m′ and an attribute set S* which satisfies T*. The attacker A wins the above game if the decryption result m′ ∉ {m*, ⊥}.

Table 3: The parameter length comparison.
2018-07-03T19:28:14.733Z
2018-06-06T00:00:00.000
{ "year": 2018, "sha1": "8385be66685626aacd7b4101d8406c5563a4c183", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/wcmc/2018/1701675.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8385be66685626aacd7b4101d8406c5563a4c183", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
119672781
pes2o/s2orc
v3-fos-license
Generalized Alexander Polynomial Invariants

We propose an algorithm which allows one to derive the generalized Alexander polynomial invariants of knots and links with the help of the q,p-numbers appearing in the bosonic two-parameter quantum algebra. These polynomials turn into HOMFLY ones under a special parametrization. The Jones polynomials can also be obtained by using this algorithm.

Introduction

The aim of this paper is to generalize the one-parameter Alexander polynomial invariants, one of the main characteristics of knots and links, to two-parameter generalized Alexander polynomial invariants. First, we recall some basic notions of the knot theory.

Applying to an initial link (knot) L+ the so-called "surgery" operation of elimination of a crossing, we obtain a simpler link/knot LO. Applying to the same initial link (knot) L+ another "surgery" operation, switching of a crossing, we obtain another simpler link/knot L−. It is postulated that: 1) every knot and link is described by a definite polynomial; 2) the three concrete polynomials P_{L+}(t), P_{LO}(t), P_{L−}(t) are connected with the help of the following (geometro-algebraic) recurrence relation, which is called the skein relationship:

P_{L+}(t) = l1 P_{LO}(t) + l2 P_{L−}(t),   (1)

where l1, l2 are coefficients; 3) the normalization condition holds for the unknot:

P_{unknot}(t) = 1.   (2)

Applying the operation of elimination to torus knots and links L_{n,2} turns them into L_{n−1,2}, and the switching operation turns them into L_{n−2,2}, where n is a positive integer number. From these considerations and from (1) follows the recurrence relation

P_{L_{n+1,2}}(t) = l1 P_{L_{n,2}}(t) + l2 P_{L_{n−1,2}}(t),   (3)

or, in simpler notations,

P_{n+1,2}(t) = l1 P_{n,2}(t) + l2 P_{n−1,2}(t).   (4)

Thus, the form of the recurrence relation (3) for torus knots and links L_{n,2} coincides with the form of the skein relationship (1).

Alexander polynomials

The Alexander polynomials ∆(t) of knots and links [1] can be defined by the skein relationship

∆_{L+}(t) = (t + t^{−1}) ∆_{LO}(t) − ∆_{L−}(t).   (7)

From (7) (in analogy to (3)) follows the recurrence relation for torus knots and links L_{n,2}:

∆_{n+1,2}(t) = (t + t^{−1}) ∆_{n,2}(t) − ∆_{n−1,2}(t).   (8)

From (8) (in analogy to (4)) one obtains the recurrence relation only for torus knots T(2m+1, 2) (or for torus links L(2m, 2)):

∆_{2m+3,2}(t) = (t^2 + t^{−2}) ∆_{2m+1,2}(t) − ∆_{2m−1,2}(t).   (9)

The Alexander polynomials of torus knots T(n, 2) can be expressed through the q-numbers characteristic of the Biedenharn–Macfarlane quantum bosonic oscillator. The bosonic q-number corresponding to an integer n is defined as [2, 3]

[n]_q = (q^n − q^{−n}) / (q − q^{−1}),   (10)

where q is a parameter. Some of the q-numbers are: [1]_q = 1, [2]_q = q + q^{−1}, [3]_q = q^2 + 1 + q^{−2}. The recurrence relation for (10) looks as

[n+1]_q = (q + q^{−1}) [n]_q − [n−1]_q.   (11)

It was found [4, 5] that ∆ of T(n, 2) can be expressed through the q-numbers (10) or, since n = 2m + 1, through the corresponding q-numbers with odd labels. In the following section we generalize these results with the help of the q-numbers.

Algorithm for obtaining Alexander polynomials from bosonic q-numbers

Analyzing the results of the previous sections, we can formulate an algorithm for obtaining the Alexander skein relationship (7). Afterwards this procedure will be used for obtaining other skein relations.

First step: we introduce polynomials A_{n,2}(q), which refer to torus knots T(2m+1, 2), satisfying the following recurrence relation (repeating (11)):

A_{n+1,2}(q) = (q + q^{−1}) A_{n,2}(q) − A_{n−1,2}(q).   (14)

According to (6):

A_{1,2}(q) = 1.   (15)

Second step: we formulate the full recurrence relation for all polynomials A_{n,2}(q) and thus find the corresponding skein relationship. From (14) we have k1 = q + q^{−1}, k2 = −1. Because of (5), we find

l1 = q + q^{−1},  l2 = −1.   (16)

Therefore

A_{n+1,2}(q) = (q + q^{−1}) A_{n,2}(q) − A_{n−1,2}(q)   (17)

holds for all the polynomials. From (17) (in analogy with (1) and (3)) we obtain the following skein relationship:

A_{L+}(q) = (q + q^{−1}) A_{LO}(q) − A_{L−}(q).   (18)

Third step: we find an expression for torus knots A_{2m+1,2}(q). In analogy with (19), we put A_{2m+1,2}(q) in the form of a combination of q-numbers with coefficients a1(q), a2(q). Using (10), (15) and (19), we find a1(q) = 1, a2(q) = 1.
Therefore, we obtain the explicit expression for A_{2m+1,2}(q).

In general, we have described a three-step procedure for obtaining: 1) the skein relationship of knots and links, and 2) the expression for polynomial invariants of torus knots T(2m+1, 2), from the structural functions of bosonic deformed oscillators. In particular, we obtained the formulas (18) and (28), which coincide with those for the Alexander polynomial invariants, (7) and (12), if q ≡ t.

Generalized Alexander polynomials A(q, p) from q,p-numbers

In this section we use the proposed three-step algorithm to obtain the generalized Alexander polynomials A(q, p) from the q,p-numbers; these reduce to the Alexander polynomials if p = q^{−1}. The q,p-number corresponding to an integer number n is introduced as [6]

[n]_{q,p} = (q^n − p^n) / (q − p),

where q, p are some complex parameters. If p = q^{−1}, then [n]_{q,p} = [n]_q.

Thus, from the normalization condition and (6),

A_{1,2}(q, p) = 1.

Second, from (23) it also follows that

[n+1]_{q,p} = (q + p) [n]_{q,p} − qp [n−1]_{q,p}.   (24)

From here one finds l1 = q + p, l2 = −qp, which leads to the generalized Alexander skein relationship [7]:

A_{L+}(q, p) = (q + p) A_{LO}(q, p) − qp A_{L−}(q, p).   (25)

Formula (25) can be written in the form

A_{L+}(q, p) − (q + p) A_{LO}(q, p) + qp A_{L−}(q, p) = 0.   (26)

By putting p = q^{−1}, the generalized Alexander skein relationship turns into the Alexander skein relationship (7). Third, we take A_{2m+1,2}(q, p) in the same form as in the third step above. From (24) we have a1(q, p) = 1, a2(q, p) = qp. Therefore, we obtain the explicit expression (28) for A_{2m+1,2}(q, p).

Generalized Alexander polynomials and HOMFLY polynomials

The HOMFLY polynomial invariants [8] are described by the skein relationship

l P_{L+} + l^{−1} P_{L−} + m P_{LO} = 0.   (29)

Comparing (26) with the HOMFLY skein relationship (29), we obtain the corresponding parametrization of l and m through q and p. Substituting this result into (29), one obtains the generalized Alexander skein relationship (26).

Generalized Alexander polynomials and Jones polynomials

The Jones polynomial invariants [9] can be defined by a skein relationship of the same form. Comparing (26) with the Jones skein relationship (31), we find that the substitution

q = t^3,  p = t   (32)

reduces the generalized Alexander polynomials to the Jones ones. According to the results of Section 3, the Jones skein relationship (31) can be obtained with the help of the proposed three-step algorithm from the q-numbers defined as

[n]_{q^3, q} = (q^{3n} − q^n) / (q^3 − q).   (33)
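Since the two-parameter numbers drive the whole construction, a small symbolic check is easy to run. The following sketch (using sympy, with variable names of my choosing) verifies the three-term recurrence (24) for [n]_{q,p} and the reduction to the bosonic q-number at p = q^{−1}.

```python
# Symbolic check: [n]_{q,p} = (q^n - p^n)/(q - p) satisfies
# [n+1] = (q + p)[n] - qp [n-1] and reduces to the bosonic q-number
# when p = 1/q.
import sympy as sp

q, p = sp.symbols("q p")

def qp_number(n: int):
    return sp.simplify((q**n - p**n) / (q - p))

# Three-term recurrence (24):
for n in range(1, 8):
    lhs = qp_number(n + 1)
    rhs = sp.simplify((q + p) * qp_number(n) - q * p * qp_number(n - 1))
    assert sp.simplify(lhs - rhs) == 0

# Reduction p -> 1/q gives the Biedenharn-Macfarlane q-number (10):
for n in range(1, 6):
    reduced = sp.simplify(qp_number(n).subs(p, 1 / q))
    target = sp.simplify((q**n - q**-n) / (q - q**-1))
    assert sp.simplify(reduced - target) == 0

print(qp_number(3))  # q**2 + q*p + p**2, up to term ordering
```

The same recurrence, seeded with the normalization value for the unknot, is exactly what the three-step algorithm iterates to build the polynomial invariants of the torus family.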
2015-10-22T10:51:49.000Z
2015-10-22T00:00:00.000
{ "year": 2015, "sha1": "87db7298e96f20e85fbafc5d1958a421e23c386f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "87db7298e96f20e85fbafc5d1958a421e23c386f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
9111079
pes2o/s2orc
v3-fos-license
Downregulation of CD1 marks acquisition of functional maturation of human thymocytes and defines a control point in late stages of human T cell development.

We have investigated whether in the human thymus the transition of CD4+CD8+ double positive (DP) to CD4+ or CD8+ single positive (SP) cells is sufficient for generation of functional immunocompetent T cells. Using the capacity of thymocytes to expand in vitro in response to PHA and IL-2 as a criterion for functional maturity, we found that functional maturity of both SP and DP thymocytes correlates with downregulation of CD1a. CD1a− cells with a persistent DP phenotype were also found in neonatal cord blood, suggesting that at least a proportion of mature DP cells can emigrate from the thymus. The requirements for generating functional T cells were investigated in a hybrid human/mouse fetal thymic organ culture. MHC class II-positive, but not MHC class II-negative, mouse thymic microenvironments support differentiation of human progenitors into TCR alpha beta+CD4+ SP cells, indicating that mouse MHC class II can positively select TCR alpha beta+CD4+ SP human cells. Strikingly, these SP cells are arrested at the CD1a+ stage and could not be expanded in vitro with PHA and IL-2. CD1a+CD4+ SP thymocytes do not represent an end stage population, because purified CD1a+CD4+ SP thymocytes differentiate to expandable CD1a− cells upon cocultivation with human thymic stromal cells. Taken together, these data indicate that when CD1a+ DP TCR alpha beta low cells mature, these cells interact with MHC, but that an additional, apparently species-specific, signal is required for downregulation of CD1a to generate functional mature TCR alpha beta+ cells.

T cell progenitors that develop in the thymus to mature T cells are submitted to a series of selective events (reviewed in reference 1), the first of which takes place when immature CD4−CD8−CD3− cells differentiate into CD4+CD8+ double positive (DP) cells. A second selection occurs when DP thymocytes differentiate into CD4+ or CD8+ mature T cells, and is generally referred to as positive selection. It is well established that positive selection involves sustained interactions of the TCR αβ heterodimer with complexes of peptides and MHC antigens on thymic stromal cells (reviewed in references 2-4). During this selection process, either CD4 or CD8 is downregulated. There is current debate over whether downregulation of CD4 or CD8, and thus commitment to CD4+ or CD8+ T cells, is dictated by the MHC specificity of the TCR (instructive model) (5, 6) or whether it occurs in a stochastic fashion independent of TCR/MHC interactions (selective model) (7-9). In the majority of the studies addressing the issue of positive selection, all CD3^high thymocytes with a CD4 or CD8 single positive (SP) phenotype were considered to have completed the process of positive selection and to be functionally mature. However, recent studies in the mouse indicate that not all SP thymocytes that have been submitted to positive selection signals are functionally mature. It is known that SP cells are phenotypically heterogeneous with respect to CD24 (heat stable antigen) and CD69 (10, 11). In addition, CD4+ SP thymocytes with intermediate levels of CD24 express very low levels of CD8 when analyzed with a sensitive panning method (11).
More recently, it has been demonstrated that although the CD4+CD8^low cells had hallmarks of positive selection, such as CD69 and high levels of TCR, they were not able to induce a lethal graft-versus-host disease upon transfer into irradiated allogeneic recipients or to survive in the periphery (12). The immature CD3^high CD4+CD8^low cells require the thymic environment to reach the end stage of positive selection (12). These data suggest that when functional immunocompetence of T cells is taken as the end stage of positive selection, this process is not necessarily completed when CD4 or CD8 are downregulated.

Here we report on the identification of downregulation of CD1a as a hallmark for functional maturation, not only of SP human thymocytes, but also of DP cells. To arrive at this model, we made use of the observations that DP cells contain in vitro clonogenic cells both in human (13, 14) and mouse (15). These observations were intriguing because, if one accepts that maturity of T cells is appropriately reflected by their capacity to expand in vitro, some DP cells should have been submitted to a maturation signal. The presence of both mature clonogenic DP cells and immature CD4+ SP cells (12) is difficult to reconcile with a linear model of thymocyte differentiation from immature CD3+CD4+CD8+ DP via immature to mature SP cells. A possible explanation for the existence of both in vitro clonogenic mature DP thymocytes and presumably immature SP cells could be that there are bifurcations in the pathway of later stages of T cell development. The data presented here are consistent with this notion, since it was found that acquisition of functional maturity correlates perfectly with downregulation of CD1a and, most importantly, not with downregulation of CD4 or CD8. Moreover, we show here that MHC class II-positive, but not MHC class II-negative, mouse thymic microenvironments can support differentiation of human progenitors into CD3+CD4+ SP cells. However, human TCR αβ+CD4+ SP cells selected on mouse MHC class II continue to express CD1a and exhibit poor clonogenic potential in vitro, suggesting that a species-specific signal is required for downregulation of CD1a and induction of functional maturity in the CD4 TCR αβ lineage.

Materials and Methods

Preparation and Phenotypic Analysis of Thymocyte Subpopulations. Thymus tissues were obtained from children 3 mo-10 yr of age undergoing median sternotomy and corrective cardiovascular surgery. Suspensions were made by mincing tissue and pressing it through a stainless steel mesh. Large aggregates were removed, and the cells were washed once before separating subpopulations. To prepare CD34+ subpopulations, total thymocytes were first incubated with saturating concentrations of anti-CD4 (RPA-T4), anti-CD8 (RPA-T8) (provided by Dr. G. Aversa, DNAX Research Institute, Palo Alto, CA), anti-CD69 (Leu-23, gift of Dr. J.H. Phillips, DNAX Research Institute), and anti-CD27 (gift of Dr. R. van Lier, Central Laboratory of the Blood Transfusion Service of the Netherlands Red Cross, Amsterdam, Netherlands). The labeled cells were removed by using magnetic beads coated with sheep anti-mouse immunoglobulins (Dynal Inc., Oslo, Norway) and a samarium cobalt magnet. The cells remaining after the first depletion were labeled with anti-CD56 (L185, from Dr. J.H. Phillips), anti-CD19, and anti-CD14 (CLB CD19 and CLB CD14, respectively, from Dr. R.
van Lier) to remove the NK, B, and myeloid cells, and again subjected to depletion with magnetic beads. The enriched cells were incubated with anti-CD34 FITC (HPCA-2; Becton Dickinson, San Jose, CA) and anti-CD1a PE (T6-RD1; Coulter Corp., Hialeah, FL). CD34+CD1a− cells were sorted with a FACStar Plus®. Purity of the cell populations was always >98%. Three-color analyses were performed with antibodies tagged with FITC, PE, or TriColor (TRC). In some experiments, biotinylated antibodies that were revealed with avidin-CyCr were used as the third antibody. Cytoplasmic staining with FITC-conjugated anti-Bcl-2 mAb (clone 124; DAKO A/S, Glostrup, Denmark) was performed as described previously (16). Three-color analysis was carried out on the FACScan®.

Limiting Dilution Assays. CD1a+ and CD1a− DP, CD4+ and CD8+ SP thymocytes were plated under limiting dilution conditions in 96-well round-bottomed plates. The thymocytes were cultured in the presence of 5 × 10⁴ irradiated (3 × 10³ rad) allogeneic PBMC and 5 × 10³ irradiated (5 × 10³ rad) EBV-transformed B cells (JY) per well in 100 μl of culture medium supplemented with 0.1 μg/ml of PHA (Wellcome, Beckenham, Kent, UK) and 30 U/ml of recombinant IL-2 (Chiron Europe, Amsterdam, Netherlands). Culture medium consisted of Yssel's medium (17) supplemented with 2% human serum. After 5 d of culture, 100 μl of fresh culture medium with 30 U/ml rIL-2 was added. Wells were screened microscopically for cell growth after 2-4 wk of culture.

Hybrid Human/Mouse Fetal Thymic Organ Cultures. The in vitro development of human T and NK cells from CD34+ thymocytes was studied using the hybrid human/mouse fetal thymic organ culture (FTOC), in which human progenitor cells were cocultured with murine fetal thymuses (18). These thymuses were obtained from embryos of recombination activating gene (RAG)-1-deficient mice on days 15-16 of gestation. To investigate the role of murine MHC class II antigens in the development of human cells, FTOC were set up with thymuses of MHC class II-deficient mice (19), provided by Dr. L. Glimcher (Harvard School of Public Health, Boston, MA). The fetal thymuses were first precultured for 5 d in the presence of 1.35 mM 2-deoxyguanosine to remove endogenous thymocytes. Next, the thymic lobes were cocultured for 2 d in hanging drops in Terasaki wells with FACS®-sorted human progenitor cells, transferred to Nucleopore filters layered over gelfoam rafts in 6-well plates, and cultured for the indicated number of days. Culture medium consisted of Yssel's medium supplemented with 2% normal human serum and 5% fetal calf serum. To analyze differentiation of human cells, the mouse thymuses were dispersed into single cell suspensions and stained with mAbs specific for human cell surface antigens.

Isolation of Stromal Cells. Heterogeneous cell cultures of thymic stroma were obtained by mechanical disruption of the thymic parenchyma and enzymatic treatment with collagenase and lipase, with enrichment for large adherent cells (20, 21). Adherent cells were cultured in RPMI-1640 (GIBCO BRL, Gaithersburg, MD) supplemented with 10% FCS. The cultures were washed each day during the first days of culture to remove any remaining thymocytes. Stromal cells were used after two or three passages.

RNA Isolation and cDNA Preparation. Total RNA was isolated from sorted cells using the guanidine thiocyanate method (22). Glycogen (20 μg; Boehringer Mannheim, Indianapolis, IN) was added to each sample to facilitate precipitation of the RNA.
The cDNA was prepared with oligo(dT)₁₅ (PharMingen, San Diego, CA) and reverse transcribed with 200 U M-MLV reverse transcriptase (GIBCO BRL). Dilutions of the cDNA in water (5 ng-eq RNA/μl) were used in PCR amplification reactions.

Semi-quantitative PCR. A semi-quantitative PCR method (23, 24) was used to compare the expression of RAG-1 in thymocyte subpopulations. Synthetic oligonucleotides (Pharmacia LKB Biotechnology Inc., Piscataway, NJ) were used as primers. Standard curves for HPRT and RAG-1 were set up using serial dilutions of cDNA prepared from RNA isolated from total thymus. Dilutions of cDNA samples in water were made starting with a concentration of 0.5 ng-eq RNA/μl to determine HPRT expression and 5 ng-eq RNA/μl to determine RAG-1 expression. PCR was carried out in a total volume of 50 μl consisting of 1 μM of each primer set, 200 μM each dNTP (Pharmacia LKB Biotechnology Inc.), 2.5 mM MgCl2, 1× PCR buffer, 1 U Taq DNA polymerase (GIBCO BRL), and 10 μl of the cDNA. Samples were covered with 50 μl paraffin oil, heated to 94°C for 5 min, and then amplified for 30 cycles of 1 min at 94°C, 1 min at 65°C, and 2 min at 72°C. After the last cycle, a final extension step at 72°C for 10 min was done. 10 μl of each PCR reaction was dot blotted on a nylon filter (Hybond N+; Amersham Intl., Buckinghamshire, UK). Filters were prehybridized at 55°C for at least 2 h (6× SSC, 0.5% SDS, 5× Denhardt's, and 100 mg herring sperm DNA per liter) and hybridized overnight with an oligoprobe recognizing specifically the HPRT or RAG-1 PCR product internal to the PCR primers. Oligoprobes were ³²P-labeled according to the manufacturer's recommendations (Boehringer Mannheim). The sequences of the probes are as follows: 5′-GTCCCCTGTTGACTGGTCATTACAAT-3′ (HPRT probe), 5′-TCCTTTGAAAAGACACCTGAAGAAGC-3′ (RAG-1 probe). To remove any nonspecifically bound probe, the filters were washed with an excess amount of 2× SSC, 0.1% SDS at 50°C. Quantitation of the PCR products was done with a phosphoimager (Fujix BAS 2000; Fuji Photo Film Co., Ltd., Tokyo, Japan) and analyzed with the supplied software. Finally, the ratio of RAG-1/HPRT was calculated to compare the expression of RAG-1 mRNA in the different samples.

Results

Identification of Clonogenic CD4+CD8+ DP and CD4+ or CD8+ SP Thymocytes in the Human Thymus. CD1a is a marker that is expressed on the great majority of DP thymocytes and on part of the SP cells (25). Since this marker is not present on mature peripheral T cells, it is generally assumed that thymic emigrants are CD1a−, and that therefore CD1a+ thymocytes are immature. Since a proportion of the SP cells is CD1a+, a linear model of differentiation predicts that virtually all DP cells would be CD1a+. A nonlinear differentiation model, however, would predict the existence of CD1a− cells among both DP and SP thymic populations. To examine this issue, we performed three-parameter flow cytometric analysis of CD1a, CD4, and CD8, which confirmed earlier data that the vast majority of DP cells, and around 40% of the SP cells, express CD1a (Fig. 1). A very small percentage of the DP cells, however, is clearly negative for CD1a. Interestingly, the levels of CD4 and CD8 on the CD1a− DP cells are lower than on total DP thymocytes (Fig. 1), suggesting that downregulation of either one of these coreceptors had already been initiated before completion of CD1a downregulation.
To address whether the CD1a− cells are functionally mature, we performed a limiting dilution of CD1a− and CD1a+ cells. Table 1 shows that the cloning efficiencies of CD1a− DP, CD4+, and CD8+ SP thymocytes in one representative experiment were 24, 41, and 34%, respectively. In sharp contrast, virtually none of the CD1a+ DP or CD1a+ SP cells could be cloned. The lack of clonogenic potential of CD1a+ subsets was not due to an inhibiting effect of the anti-CD1a antibody, since cloning efficiencies of unseparated SP cells plated in the presence or absence of anti-CD1a mAb were virtually identical (results not shown). The in vitro expandable DP thymocytes could be cloned, and the majority of the DP clones maintained their DP phenotype for a prolonged period (13, 14). These data conclusively demonstrate that the clonogenic potential of SP and DP thymocytes resides exclusively in the CD1a− subset, and that functional maturation, as defined by the ability to clonally expand, can already be manifested at the DP stage of thymocyte maturation.

Characterization of Immature CD1a+ SP Thymocytes. The results of the limiting dilution experiments indicate that CD1a+CD3^high SP thymocytes are functionally immature. The observation that the great majority of the CD3^high cells in the thymus express the activation marker CD69, which is induced after positive selection (26-28), suggests, however, that positive selection signals have been delivered to a significant proportion of the CD1a+ SP cells. To further substantiate whether the CD1a+ SP cells have been submitted to selection signals, we investigated not only expression of CD69, but also Bcl-2, which is associated with positive selection as well (29, 30). In addition, expression of CD27 was analyzed. CD27 is expressed on most CD3^high human thymocytes and may also be associated with positive selection (31). Three-parameter analysis of CD1a+ SP cells confirms that the majority of these cells express CD69, Bcl-2, and CD27 (Table 2). These data are consistent with expression profiles of CD69 (25), Bcl-2 (32), and CD27 (31) on total human CD3^high thymocytes published previously. Besides upregulation of CD69, positive selection also results in downregulation of RAG-1 (28, 33). A semi-quantitative reverse transcriptase-PCR of the CD1a+ and CD1a− SP populations revealed that CD8+CD1a+ cells still express levels of RAG-1 comparable to those of total thymocytes (Fig. 2 A). The levels of RAG mRNA in CD1a+CD4+ SP cells, however, are much lower than those of total cells (Fig. 2 B). Taken together, these data can be interpreted to indicate that CD1a+ SP cells express some, but not all, features of cells that have received a TCR-mediated positive selection signal.

Development of Human CD4+ SP Cells in a Mouse Fetal Thymus Requires Mouse MHC Class II Antigens, but the Mouse Thymus Is Inefficient at Inducing Maturation of Human CD4+ SP T Cells. Recently it was demonstrated that human progenitor cells can develop in mouse thymic organs in a FTOC (18, 34-37). Human progenitor cells developed into SP cells, but human stromal cells were not detectable in such cultures (36). Human CD4 can replace mouse CD4 in development of mouse MHC class II-restricted T cells (38). To address the question of whether interaction of human CD4 with mouse MHC class II can select human CD4+ T cells, FTOC were performed with thymi from MHC class II-positive and MHC class II-deficient mice.
The mouse thymi were reconstituted with CD34+CD1a− postnatal thymocytes, which include the most primitive thymic T cell precursors (39, 40). After incubation in a MHC class II-positive murine thymic microenvironment, 6.5% of the harvested cells were TCR αβ+CD4+ SP (Fig. 3 A). By contrast, the number of TCR αβ+CD4+ SP T cells recovered from thymi of MHC class II-deficient mice was reduced considerably, to 0.46% in experiment 1 (Fig. 3 B) and 0.05% in experiment 2 (Fig. 3 C), compared to that recovered from thymi of MHC class II-positive mice (6-10% in four independent experiments). A significant portion of the TCR αβ+ cells that developed in a class II MHC-positive thymic environment expressed CD69 (Fig. 3 D), indicating that some cells were activated, presumably as a consequence of selection via the TCR. These findings indicate that mouse MHC class II antigens can positively select human CD4+ cells. It is noteworthy that very few human CD8+TCR αβ+ SP cells could be recovered from the FTOC with human CD34+CD1a− cells and the mouse thymi. There were more CD3+CD8+ SP cells present, and >90% of those cells express TCR γδ (results not shown). One possible reason for this is that mouse MHC class I does not efficiently select human CD8+ T cells, despite the fact that human CD8 is able to functionally interact with the α3 domain of mouse H-2Kb (41). Another explanation is that in addition to class I MHC, other signals are required for selection of CD8+ T cells.

Having established that MHC class II antigens can support development of human CD4+ SP T cells, we next investigated whether the mouse thymic environment can induce downregulation of CD1a and functional maturation. Early thymic CD34+CD1a− progenitors were isolated and cultured in FTOC for 4 wk. Analysis of the cells harvested from the FTOC revealed the presence of TCR αβ^high and TCR γδ^high cells. Almost all TCR αβ^high cells expressed CD1a, while most TCR γδ^high cells lacked CD1a (Fig. 4). Immature TCR γδ^dim cells mostly expressed CD1a (Fig. 4). Stimulation of the cells harvested from the FTOC with a feeder cell mixture, PHA, and IL-2 resulted in expansion mostly of TCR γδ+ cells and few TCR αβ+ cells (Fig. 5). Most of those TCR αβ+ cells that were expanded expressed CD4 (Fig. 5). These data demonstrate that although the MHC class II-positive mouse thymic environment can support development of CD4+ SP thymocytes, it is very inefficient in induction of functional maturation of these cells. By contrast, the mouse thymic microenvironment efficiently induces maturation of TCR γδ+ cells. Thus, failure of the mouse MHC class II-positive environment to induce functional maturation of TCR αβ+ cells is not due to an intrinsic incapability to support maturation of human T cells.

Differentiation of CD1a+ to CD1a− CD4+ SP Cells. The presence of CD1a+ and CD1a− SP thymocytes raises the question of whether CD1a+ SP cells are the direct precursors of CD1a− SP cells. An alternative possibility would be that the CD1a− SP cells are derived from the CD1a− DP cells and that CD1a+ SP cells represent a dead-end lineage. To investigate this, we cocultured purified CD1a+CD4+ SP cells with short-term cultured human thymic stromal cells. This coculture resulted in a gradual downregulation of CD1a, which was completed on day 12 (Fig. 6 A). Phenotypic analysis of these cells reveals that they express high levels of TCR αβ and CD4. Unexpectedly, many of these cells also express CD8α (Fig. 6 B).
These differentiated CD1a− cells could be expanded in vitro, and the phenotype did not alter upon expansion (Fig. 6 B). These data indicate that CD1a+CD4+ SP cells can differentiate to expandable CD1a−CD4+ SP cells and also to expandable CD1a−CD4+CD8α+ cells.

Presence of DP Cells in Neonatal Cord Blood. As indicated in Fig. 1, the thymus contains expandable CD1a− DP cells. Although not shown here, we were able to clone DP cells, and these clones maintained a persistent DP phenotype upon long-term culture, in accord with published data (13, 14). The presence of DP cells expressing several characteristics of maturity raises the question of whether these cells are able to migrate from the thymus. DP cells could be observed in T cell samples from neonatal cord blood (Fig. 7) in percentages ranging from 0.5 to 3% of the total number of CD3+ T cells (n = 4). All DP cord blood cells express CD3 and CD27, and they lack CD1a and the activation antigen CD69 (Fig. 7). Further analysis of these cells indicates that the majority express CD45RA and are negative for CD45RO and Fas (Fig. 7), suggesting that these DP cells represent naive, not memory, cells. Moreover, as in the thymus (42), both CD8α+β−CD4+ and CD4+CD8α+β+ cells could be observed. These observations are compatible with the notion that DP cells can migrate out of the thymus.

Discussion

In this paper we have investigated acquisition of functional maturity by human thymocytes. The hypothesis that forms the basis of this study is that maturity of T cells is appropriately and faithfully reflected by their capacity to expand in vitro. We think that this is the case because in vitro expandability is a general property of mature peripheral T cells. Moreover, T cell clones derived from mature thymocytes can mediate cytotoxic activities and produce cytokines upon stimulation in vitro (data not shown). Accepting our premise, the data argue that some DP cells are mature, while a considerable proportion of the SP cells in the human thymus are functionally immature. Most importantly, acquisition of functional maturity correlates with downregulation of CD1a. The cognizance that CD1a marks immature cells within thymocyte subpopulations allowed a meaningful and detailed analysis of these cells and a comparison with functionally mature thymocytes. Our findings that CD1a+ SP cells are not clonogenic in vitro confirm and extend findings of Vanhecke et al., who investigated the clonogenic potential of CD4+ SP human thymocyte subsets (42).

Figure 6. CD1a+CD4+ SP thymocytes differentiate into CD1a− cells upon coculture with thymic stromal cells (A), and part of these cells upregulate CD8 expression (B). Thymocytes were labeled with CD1a FITC, CD4 PE, and CD8 TRC. CD1a+ and CD1a− CD4^high SP cells were sorted, and part of the cells were used to check CD3 expression by staining with CD3 TRC (all CD1a− and >99% of CD1a+ CD4 SP thymocytes were CD3+). In experiment A, 2 × 10⁵ CD1a+ CD4 SP (>98% purified) and 2 × 10⁵ CD1a− CD4 SP cells were cultured on a monolayer of human thymic stromal cells. After 4 and 12 d, cells were tested for CD1a expression. The cell numbers of wells started with CD1a+ and CD1a− CD4 SP cells were both reduced to 8 × 10⁴ after 4 d, whereas at day 12, 2.5 × 10⁴ and 7.0 × 10⁴ (CD1a−) cells were recovered from the cultures started with CD1a+ and CD1a− CD4 SP thymocytes, respectively.
In experiment B, sorted CD1a+ and CD1a− CD4 SP thymocytes were cultured for 7 d on a monolayer of thymic stromal cells, assayed for CD4 and CD8 expression, and expanded in vitro with feeder cells, PHA, and IL-2. Expanded cells were also analyzed for CD4 against CD8 expression.

The CD1a+CD4+ SP thymocytes acquire the capacity to expand in vitro after cocultivation with short-term cultured thymic stromal cells. This acquisition paralleled downregulation of CD1a. The observations identifying functionally immature CD1a+CD4+ SP cells are compatible with recent findings in the mouse by Dyall et al. (12), who demonstrated that a proportion of CD4+ SP murine thymocytes are functionally immature by several criteria. In many respects, the immature CD1a+CD3+CD4+ SP thymocytes in the human thymus are similar to the functionally immature CD3+CD4+ SP cells in the mouse thymus (12). The immature mouse CD4+ SP thymocytes can be distinguished from mature cells by virtue of their expression of CD24 and high levels of CD69 (12). CD8 was not detectable by fluorimetric analysis, but the fact that the immature CD4+ SP mouse cells can be retained on anti-CD8 immobilized on plastic indicates that low levels of CD8 are present on the immature CD4+ SP cells (12). Similar to the immature CD3+CD4+ SP population in the mouse, the human CD1a+CD4+ cells express CD69 and Bcl-2, indicative of cells that have been submitted to a positive selection signal (26-30). In addition, most CD1a+CD4+ SP cells express CD27, which may be another marker that is induced by positive selection (31, 42, 43). Human CD1a+CD3+CD4+ SP cells expressed RAG-1, though at lower levels than total thymocytes. However, our inability to analyze RAG expression in individual cells precludes consideration of the possibility that a proportion of CD4+CD1a+ SP cells are negative for RAG-1. Also, the CD8+ SP population in the human thymus contains functionally immature CD1a+ cells. The observation that heat stable antigen+ CD8+ SP cells are present in the mouse thymus (10) suggested that there are immature cells also within the CD8+ SP thymic population, but the functional activity of those cells was not tested. The immature human CD1a+CD8+ SP cells are similar to the CD1a+CD4+ SP cells in that the majority expresses CD69, Bcl-2, and CD27, but differ in expression levels of RAG-1, which are much higher than in CD1a+CD4+ SP cells. Although the expression of CD27, CD69, Bcl-2, and high levels of CD3 on part of the immature CD1a+ DP and almost all CD1a+ SP thymocytes indicates that these CD1a+ cells have been submitted to a positive selection signal, it is clear that a transition to CD1a− cells is required to confer functional maturity to these thymocytes. Two possible mechanisms for the discrepancy between upregulation of CD27, CD69, and Bcl-2 and the CD1a+ to CD1a− switch can be put forward. One is that downregulation of CD1a and acquisition of functional competence requires a much greater sustained MHC/TCR interaction than induction of CD69. This idea would take into consideration data from mouse studies indicating that consecutive, or perhaps even continual, TCR/MHC interactions are required to complete positive selection (44-46). A second possibility is that MHC/TCR interactions are sufficient for upregulation of CD69, but that an additional signal is required for downregulation of CD1a and acquisition of functional maturity.
Our experiments with the hybrid human/mouse FTOC system provide support for the notion that two signals are required for induction of the functional program in immature thymocytes. We observed that although interactions of mouse MHC class II antigens with human CD4 and TCR drive generation of CD4+ SP cells, the mouse thymic microenvironment is very inefficient in downregulating CD1a and inducing functional maturation of CD4+ TCR αβ T cells. The inefficiency of the mouse thymic microenvironment in inducing functional maturity in CD4+TCR αβ+ T cells is not due to, for example, tissue culture conditions, since TCR γδ cells mature efficiently in the mouse FTOC. Moreover, we observed that cocultivation of CD1a+CD4+ SP cells with human thymic stromal cells results in differentiation to CD1a− cells. Taken together, these observations raise the possibility that species-specific activating or costimulatory molecules are required for efficient maturation of human CD4+ T cells.

Figure 7. Three-parameter analysis of neonatal cord blood cells. Cord blood lymphocytes were obtained by centrifugation over Lymphoprep (Nycomed Pharma, Oslo, Norway). Monocytes/macrophages and contaminating erythrocytes were depleted with goat anti-mouse IgG-coated magnetic beads (Dynal Inc.) using monoclonal antibodies against CD14 and glycophorin. The remaining cells were stained with CD4 PE and CD8 TRC against the indicated FITC-conjugated antibodies.

In this paper, we confirm and extend earlier findings (13, 14) that the human thymus contains in vitro expandable DP cells. In vitro expandable DP thymocytes have also been found in the mouse (15). In vitro expandability of human DP thymocytes correlates perfectly with completion of downregulation of CD1a, as was also the case for SP cells. Our observations that DP cells are present in the periphery of neonates suggest that some mature CD1a− DP cells may migrate out of the thymus. Most of these peripheral DP cells have characteristics of naive cells in that they express CD45RA and are negative for CD45RO and Fas, which are selectively expressed on memory cells. As was also found within the mature DP thymocytes, cord blood DP T cells lack CD1a, and a proportion lacks CD8β as well. The characteristics of these DP cord blood cells make it unlikely that they are derived from peripheral SP cells that have upregulated CD4 or CD8 due to activation. It seems, therefore, that the CD4CD8 phenotype becomes stable once the DP cells have been submitted to a maturation signal. The fact that cloned lines of DP thymocytes with a sustained CD4CD8 phenotype can be established is consistent with this notion. It is relevant to note that cloned lines of DP T cells have been established from peripheral T cells (47). It is possible that those clones originated from cells that emigrated from the thymus as DP cells. The presence of both mature clonogenic DP cells and immature SP cells is difficult to reconcile with a generalized linear model of thymocyte differentiation from immature TCR αβ+CD4+CD8+ DP cells via immature to mature SP cells. It is possible that for some thymocytes, completion of positive selection and acquisition of functional competence can occur at the DP stage, while for others this could happen at the SP stage. However, at least part of the CD1a− DP cells could be derived not from CD1a+ DP cells but from CD1a+CD4+ SP cells. This is suggested by the experiments in which CD1a+CD4+ SP thymocytes were cocultured with short-term cultures of postnatal stromal cells.
The cells harvested from such cocultures lacked CD1a and expressed CD3 and CD4, but a large proportion of these cells coexpress CD8α. This phenotype persisted after expansion of these cells. It is also possible that some of the CD1a− DP cells are derived from CD1a− SP cells, as suggested by the experiments depicted in Fig. 6. Finally, we have considered the possibility that the CD1a− DP cells are derived from cells that never expressed CD1. This cannot be excluded; however, we consider this unlikely since virtually all CD3−CD4+ immature SP cells, considered to be the precursors of the DP cells, express CD1a (39). Further experiments are needed to elucidate the differentiation patterns of thymocytes after being submitted to positive selection mediated by the MHC/TCR interaction.
2014-10-01T00:00:00.000Z
1997-01-06T00:00:00.000
{ "year": 1997, "sha1": "b37a678bdd71d9a494547f29f709e85372a4f9ef", "oa_license": "CCBYNCSA", "oa_url": "http://jem.rupress.org/content/185/1/141.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "b37a678bdd71d9a494547f29f709e85372a4f9ef", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
228989214
pes2o/s2orc
v3-fos-license
Investigating Major Drivers of Performance in Community Water Projects: A Case of Water Projects in Saku Sub County, Marsabit County, Kenya

Abstract — The purpose of this study was to determine the major drivers of performance in community water projects, taking the case of water projects in Saku Sub County, Marsabit County, Kenya. The study investigated the role of management planning, availability of funding, community participation and project governance policies on the performance of community water projects in Saku Sub County. The study used Community Development theories and adopted a descriptive research design. The target population for this study was 106, with a sample of 84 respondents. The study used questionnaires; quantitative data were analyzed using SPSS 25.0, qualitative data using thematic content analysis, and inferential data using multiple regression. The research found that stakeholder involvement and planning at all levels of project implementation influence the performance of community water projects in Saku Sub County in Marsabit County, Kenya to a great extent. The study further found that trained, adequate human resources influence the performance of community water projects to a very great extent. The research found that frequency of meetings, project ownership, and level of involvement influence the performance of community water projects to a great extent. The study concluded that management planning had the greatest influence on the performance of community water projects, followed by funding, and then project governing policies, while community participation had the least influence on the performance of community water projects. The study recommends inclusive planning at all levels of the project cycle and that the government develop mechanisms to curb corruption occurrences, especially during project implementation.

I. INTRODUCTION

Development is a concept that is of great concern to communities, and the globe has embraced this agenda with not only the implementation of the Millennium Development Goals (MDGs) of 2000 but also the Sustainable Development Goals of 2015. The United Nations defines community development as the process that is meant to provide conditions of economic and social progress for the entire community. In Kenya, the Constituencies Development Fund (CDF) (Act No. 11) is tasked with ironing out regional imbalances brought about by patronage politics [1]. CDF provides funds to constituencies through the respective members of the National Assembly. County Governments have the objective of helping to improve the livelihood of the locals, either through direct participation or by providing funding to supplement the national government's allocation to the various sectors. Most of these funds provided by County Governments are project-driven short-term funds, which do not factor in the whole funding mechanism policies that will ensure that such projects become sustainable after the county funds have been withdrawn. To ensure project performance, it is crucial to have a well-thought-out strategy that not only looks at how a community water project is completed, but also at the means to continue with the project after the county funds have been withdrawn [2].

A. Statement of the Problem

Community water projects in Marsabit County have not been performing well, and cases of mismanagement of resources due to malpractice have been reported.
Cases of delayed completion of the projects have been reported in the county, citing various challenges such as delays in involving experts from the community during the initiation of community development based projects [3]. Some of the community water projects in Saku Sub County in Marsabit County have not been completed from 2016 to date, due to financial challenges caused by reduced funding from donors, mismanagement of resources by the management committees, and lack of clear governing policies to implement the projects (Galm Qampise, program officer, Kivulini Trust). Despite the poor performance of community development based projects in Saku Sub County in Marsabit County, Kenya, there is scarce literature on the sub county; most of the available literature focuses on other counties. For example, Karithi [4] examined factors influencing performance of community water projects in Tigania Central District, Meru County, Kenya; Cheruiyot [5] examined factors influencing performance of community based water projects in Bomet County; and Githua [6] assessed the factors influencing performance of community water projects in Njoro Sub County. This study was therefore essential to the community members of Marsabit County, whose projects seem not to last long enough to serve them. Hence this study sought to bridge these gaps and establish the factors influencing the performance of community water projects in Saku Sub County. Its objectives were: i. to establish the influence of management planning on the performance of community water projects in Saku Sub County in Marsabit County, Kenya; ii. to assess the influence of funding on the performance of community water projects in Saku Sub County in Marsabit County, Kenya; iii. to evaluate the influence of community participation on the performance of community water projects in Saku Sub County in Marsabit County, Kenya; iv. to determine the influence of the projects' governance policies on the performance of community water projects in Saku Sub County in Marsabit County, Kenya.

A. Performance of Community Water Projects

According to Richardson [7], the performance of CWPs is considered in relation to the achievement of project set objectives within the constraints of time, cost and quality. During project implementation, performance indicators inform the project team of the project's progress as it gears towards achieving the ultimate goals and/or objectives. The performance of CDPs, in view of the time and cost incurred and the quality shown, can also be influenced by external factors. According to Burke [8], failure to plan in project management has a ripple effect on a project's survival that remains uncontrollable until it has been dealt with from the basics. It is a project plan that shows a project's end from the beginning. In their study, Usman, Kamau and Mireri [9] state that the inability to implement governing policies is a major setback to project performance in developing countries. Policies can have a positive or negative influence on project performance.

B. Management Planning and Performance of Community Water Projects

The planning process of management is a central endeavor of project management. The contribution of the planning process to project performance is major, as planning forms the foundation on which the entire project rests. It provides a clear picture of the project, that is, the project scope, its beginning, its means and its end. It outlines and describes the project activities, how they will be accomplished and the expected outcome or end products [10]. The main purpose of the planning process is to identify and define major project tasks, estimate the time and resources required to carry them out, and come up with a framework for managing, reviewing and controlling the project activities.
This study seeks to bridge this gap in previous studies by establishing the effect of management planning on the performance of community water projects.

C. Funding and Performance of Community Water Projects

Karithi [4], in his study on factors influencing performance of community water projects in Tigania Central Sub-County, established that more rural people were involved in addressing their own development, but the study failed to point out the effect of funding on the performance of community water projects. In addition, Odoyo [11], in his review of the factors affecting performance of community water projects in Kenya, failed to establish the effect of funding on the performance of community water projects. This study therefore seeks to bridge these gaps by establishing the effect of funding on the performance of community water projects.

D. Community Participation and Performance of Community Water Projects

Kaufman and Poulin [12] state that the involvement of community members in community initiatives is a requirement that cannot be ignored, owing to the fact that these projects are by the communities and for the communities. The involvement emanates right from project initiation through execution and closure. Studies by Maimuna [13] and Njogu [14], who studied factors influencing the performance of community water projects in Isiolo and Meru Counties respectively, failed to highlight the effect of community participation on the performance of community water projects. This study therefore seeks to bridge these gaps by establishing the effect of community participation on the performance of community water projects.

E. Project Governance Policies and Performance of Community Water Projects

Studies by Oyugi [15] and Usman, Kamau and Mireri [9] reported that government policies and procedures in Nigeria put in place to guide national development initiatives have not been effectively implemented. However, these studies failed to show how project governing policies affect the performance of community water projects. This study therefore seeks to bridge these gaps by establishing the effect of project governing policies on the performance of community water projects.

F. Theoretical Framework

1. Community Development Theories

This theory originated from the work of Lewin [15], whose theory stated that 'people support what they help create.' The theory was applicable to the study as it upheld the place of the community in any involvement. Its strength lies in the fact that it proves the participation of people in initiatives and is therefore relevant to community water projects. It however does not outline how the participation and close involvement from the beginning is done. This study specifically utilized the Stages of Community Development Groups theory [17], which considers a group or organization as a community that must go through four stages: the pseudo stage, where conflict is avoided; the chaotic stage; emptiness, where members embrace the need to work for the interest of the group; and finally the authentic stage, where the community enhances its understanding and proceeds to achieve progress.

A. Research Design

The study adopted a descriptive research design. Creswell and Plano Clark [18] state that descriptive research determines and reports the way things are. A descriptive research design was therefore significant in this study as it informed the researcher of the exact position of the phenomenon being studied without altering its state. The description in the research design sought to answer such questions as what, how, when, and where.
C. Research Instrument
For this study, the researcher made use of questionnaires in gathering primary data. The choice of this instrument was informed by its advantages: it is free from interviewer bias, and respondents have ample time to give well thought out answers.
D. Pilot Testing of Instruments
The pilot study enabled the researcher to probe the feasibility of the methods and procedures that were used in the main study. The accuracy of the data collected is largely dependent on the validity and reliability of the data collection instruments, which can only be established through a pilot test [20].
E. Validity of the Research Instrument
The researcher applied face validity to the study's questionnaire with the help of the researcher's supervisor, who gave it a subjective overview. Further, the study also examined the content validity of the chosen research tools through persistent consultations with raters from the University of Nairobi with respect to readability, clarity and comprehensiveness of measurement of the construct of interest.
F. Reliability of the Research Instrument
The study embraced the internal consistency technique, employing Cronbach's alpha to examine the reliability of the research questionnaire used in the study.
G. Data Collection Procedure
Data was collected using a questionnaire with both open ended and closed ended questions structured to meet the objectives of the study, administered to the respondents through the research assistant.
H. Data Analysis Technique
Data was analyzed using the Statistical Package for Social Sciences (SPSS Version 25.0). All received questionnaires were referenced, and questionnaire items were coded to facilitate data entry. After data cleaning, which entailed checking for errors in entry, descriptive statistics such as frequencies, percentages, mean scores and standard deviations were estimated for all the quantitative variables, and the information was presented in the form of tables. The qualitative data from the open-ended questions was analyzed using thematic content analysis and presented in narrative form. Inferential data analysis was done using multiple regression analysis, which was used to establish the relations between the independent and dependent variables.
I. Ethical Considerations
To conduct this study, the researcher sought both an introductory letter from the graduate school, University of Nairobi, to ascertain that he was a bona fide student, and a permit from the National Commission for Science, Technology and Innovation (NACOSTI). Permission was also sought from the intended respondents to indicate their willingness to participate, and their anonymity in answering the research instruments was upheld.
IV. RESEARCH FINDINGS
A. Respondents' Gender
The findings revealed that 55.3% of the respondents were male while 44.7% were female.
B. Respondents' Age Bracket
The findings show that 27.0% of the respondents were aged 46 years and above, 26.3% were aged between 26-35 years, 25.7% were aged between 18-25 years, while 21.1% were aged between 36-45 years.
The findings from the data analysis show that 34.2% of the respondents indicated that management planning influences the performance of community water projects in Saku Sub County in Marsabit County, Kenya to a great extent, 32.2% indicated to a very great extent, 13.2% indicated to a low extent, 12.5% indicated not at all and 7.9% indicated to a moderate extent. This implies that management planning influences the performance of community water projects in Saku Sub County to a great extent.
The findings from the data show that 34.9% of the respondents indicated that funding influences the performance of community water projects in Saku Sub County in Marsabit County, Kenya to a great extent, 27.0% indicated to a very great extent, 15.1% indicated not at all, 13.2% indicated to a moderate extent and 9.9% indicated to a low extent. This implies that funding influences the performance of community water projects in Saku Sub County in Marsabit County, Kenya to a great extent.
The findings from the data analysis show that the respondents indicated that frequency of meetings (mean of 4.046), project ownership (mean of 3.763) and level of involvement (mean of 3.559) influence the performance of community water projects in Saku Sub County of Marsabit County to a great extent. The respondents further indicated that decision making (mean of 3.467) influences the performance of community water projects in Saku Sub County of Marsabit County to a moderate extent.
The findings revealed that 38.2% of the respondents indicated that projects' governing policies influence the performance of community water projects in Saku Sub County in Marsabit County to a great extent, 30.9% indicated to a very great extent, 13.8% indicated to a moderate extent, 9.2% indicated to a low extent and 7.9% indicated not at all. This implies that projects' governing policies influence the performance of community water projects in Saku Sub County in Marsabit County to a great extent.
H. Performance of Community Water Projects
The study further sought to establish the trend of aspects of performance of community water projects in Saku Sub County of Marsabit County over the last 5 years. The findings show that the respondents indicated that completion within the set budget (mean score of 3.836), satisfaction of community members (mean score of 3.743), realization of set objectives (mean score of 3.684) and completion within the set time (mean score of 3.605) have improved over the last 5 years.
From the findings, the independent variables were statistically significant in predicting the dependent variable, since the adjusted R square was 0.719. This implies that 71.9% of the variation in the performance of community water projects in Saku Sub County in Marsabit County, Kenya is explained by management planning, funding, community participation and project governing policies. Other factors influencing the performance of community water projects in Saku Sub County in Marsabit County, Kenya that were not covered in this study accounted for the remaining 28.1%, which forms a basis for further studies. From the ANOVA, the p-value was 0.000 and the F-calculated value was 47.076. Since the p-value was less than 0.05 and the F-calculated value was greater than the F-critical value (2.5066), the regression relationship was significant in determining how management planning, funding, community participation and project governing policies influenced the performance of community water projects in Saku Sub County in Marsabit County, Kenya.
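For clarity, the model implied by this analysis can be written out. Only the summary statistics above are reported in this excerpt, so the coefficients are left symbolic and the error degrees of freedom of the F-test are not stated:

$$ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \varepsilon, \qquad \bar{R}^2 = 0.719 $$

where Y is project performance and X1-X4 are management planning, funding, community participation and project governing policies. The reported F-test then rejects the null hypothesis of no joint effect:

$$ F_{\text{calc}} = 47.076 > F_{\text{crit}} = 2.5066 \;\Rightarrow\; \text{reject } H_0 : \beta_1 = \beta_2 = \beta_3 = \beta_4 = 0 \text{ at } \alpha = 0.05 $$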
I. Multiple Regression Analysis
The regression equation established that, taking management planning, funding, community participation and project governing policies as constant, the performance of community water projects in Saku Sub County would be 1.267. The findings also show that an increase in management planning leads to a 0.821 increase in the performance score of community water projects in Saku Sub County if all other variables are held constant. This variable was significant, since the p-value of 0.027 was less than 0.05.
V. DISCUSSION OF FINDINGS
The research found that stakeholder involvement and planning at all levels of project implementation influence the performance of community water projects in Saku Sub County in Marsabit County, Kenya to a great extent. The study further found that trained, adequate human resources influence the performance of community water projects to a very great extent. The research found that frequency of meetings, project ownership and level of involvement influence the performance of community water projects to a great extent.
VI. CONCLUSIONS AND RECOMMENDATION
The study concludes that management planning has a positive and significant influence on the performance of community water projects in Saku Sub County in Marsabit County, Kenya. The study concluded that financial management mechanisms, such as requirements for detailed proposals with clear objectives and goals for the use of funds and prioritization of projects funded within the budgets and strategic plans, are to be upheld. The study concluded that funding has a significant influence on the performance of community water projects in Saku Sub County in Marsabit County, Kenya. The study deduces that appropriate controls and safeguards should also be put in place to prevent the misuse and inappropriate application of finances appropriated and given as conditional and unconditional grants; the controls in question include audits and budgeting. The study further concluded that community participation influences the performance of community water projects in Saku Sub County in Marsabit County, Kenya. The study deduces that it is necessary for project teams to involve the community in all aspects of the community water project. Community participation ensures strong support for effective performance of the community project. Further, emphasis on community participation in the development and management of community water projects is a sure sign that the project has a bright chance of functioning optimally on a sustainable basis. The study further concluded that projects' governing policies have a significant and positive influence on the performance of community water projects in Saku Sub County in Marsabit County, Kenya. The study concluded that policy making and implementation that involves key development practitioners bears much fruit, and that vices like corruption and a lack of constant policy updates affect these processes. The study recommends inclusive planning at all levels of the project cycle and requests that the government develop mechanisms to curb corruption, especially during project implementation.
The study also recommends that further studies be conducted in other counties to compare the results.
2020-10-29T09:03:16.583Z
2020-10-25T00:00:00.000
{ "year": 2020, "sha1": "5b5b8f7d5bbdb90731e9508d2a2a07c9ed08fef0", "oa_license": "CCBYNC", "oa_url": "https://ejbmr.org/index.php/ejbmr/article/download/530/327", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "21972a0d3fd38223afed2c504d194cc9d6f5a082", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Geography" ] }
204849001
pes2o/s2orc
v3-fos-license
Live-cell imaging of single mRNA dynamics using split superfolder green fluorescent proteins with minimal background
The MS2 system, with an MS2 binding site (MBS) and an MS2 coat protein fused to a fluorescent protein (MCP–FP), has been widely used to fluorescently label mRNA in live cells. However, one of its limitations is the constant background fluorescence signal generated from free MCP–FPs. To overcome this obstacle, we used a superfolder GFP (sfGFP) split into two or three nonfluorescent fragments that reassemble and emit fluorescence only when bound to the target mRNA. Using the high-affinity interactions of bacteriophage coat proteins with their corresponding RNA binding motifs, we showed that the nonfluorescent sfGFP fragments were successfully brought close to each other to reconstitute a complete sfGFP. Furthermore, real-time mRNA dynamics inside the nucleus as well as the cytoplasm were observed by using the split sfGFPs with the MS2–PP7 hybrid system. Our results demonstrate that the split sfGFP systems are useful tools for background-free imaging of mRNA with high spatiotemporal resolution.
INTRODUCTION
Fluorescent proteins (FPs) fused to RNA binding proteins (RBPs) have been widely used for visualizing the transport and dynamics of RNA in living cells (Tyagi 2009; Rath and Rentmeister 2015). In particular, the MS2 system, using the high-affinity interaction between the MS2 coat protein (MCP) and the MS2 binding site (MBS) (Bertrand et al. 1998; Beach et al. 1999), has been extensively adopted for tagging a specific RNA, enabling the study of RNA localization and dynamics in many different organisms, from bacteria to mammalian cells and tissues (Bertrand et al. 1998; Rook et al. 2000; Forrest and Gavis 2003; Fusco et al. 2003; Golding and Cox 2004; Chubb et al. 2006; Lionnet et al. 2011; Park et al. 2014). To distinguish the RNA tagged with the MCP fused with a fluorescent protein (MCP–FP) from the constant background of unbound MCP–FPs, two strategies have been utilized: (i) introducing multiple repeats of the MBS motif into the target RNA, and (ii) attaching a nuclear localization sequence (NLS) to the MCP–FP to accumulate the unbound MCP–FPs in the nucleus and thus reduce the background in the cytoplasm (Fusco et al. 2003). By applying this approach, one can label a target RNA with multiple FPs and obtain a high signal-to-background ratio for single RNA tracking in the cytoplasm. However, this method has limitations in tracking single RNA in the nucleus and in quantifying RNA expression levels by deep tissue or whole animal imaging, which provides relatively low spatial resolution. Recently, a split FP approach, which was developed to study protein–protein interactions (Kerppola 2006), was adopted to eliminate the background signal in RNA imaging. In the bimolecular fluorescence complementation (BiFC) assay, nonfluorescent FP fragments reconstitute a complete fluorescent protein when they are brought into close proximity. Using two different RBPs conjugated to split FP fragments, several research groups have demonstrated BiFC-based RNA imaging (Rackham and Brown 2004; Ozawa et al. 2007; Valencia-Burton et al. 2007; Yamada et al. 2011; Yiu et al. 2011; Wu et al. 2014). For example, the PP7 system, consisting of the PP7 coat protein (PCP) and the PP7 binding site (PBS), can be used in conjunction with the MS2 system for BiFC-based RNA imaging (Wu et al. 2014). In this MS2–PP7 hybrid system, a target RNA was tagged with an alternating tandem array of MS2 and PP7 binding sites (12 × MBS-PBS).
MCP and PCP were fused with the N- and C-fragments of the yellow fluorescent protein Venus, respectively. Because the split Venus fragments form a complete FP when the MCP and PCP are bound to the 12 × MBS-PBS-tagged RNA, this approach enables background-free RNA imaging at a single-molecule level. However, a limitation of the BiFC-based approach is that there is a time delay between the production of the target RNA and the generation of the fluorescence signal. Because the folding and maturation of FPs require a substantial amount of time, ranging from several minutes to a few hours, BiFC-based RNA imaging techniques have been considered suitable only for imaging long-lived RNAs (Xia et al. 2017). Moreover, the time delay of the BiFC signal hampers the detection of nuclear RNA and nascent RNA being made at transcription sites. Here, we report a real-time background-free imaging method using split superfolder GFPs (sfGFPs) as a next-generation RNA probe in living cells. Taking advantage of the properties of sfGFPs, such as improved folding kinetics (Pédelacq et al. 2006; Andrews et al. 2007) and a relatively fast maturation rate (Pédelacq et al. 2006; Iizuka et al. 2011; Khmelinskii et al. 2012; Balleza et al. 2018), we have extended the utility of split FP-based single RNA imaging in live cells. Two variants of the split sfGFP, which were developed by directed evolution of two or three nonfluorescent fragments of sfGFP (Cabantous et al. 2005, 2013), are fused to MCP and PCP. The so-called bipartite and tripartite split GFPs fused to capsid proteins can successfully bind to the 12 × MBS-PBS-tagged RNA and thus are brought together to reconstitute a complete mature sfGFP. We report that the tripartite sfGFP is particularly suitable for observing mRNA dynamics not only in the cytoplasm but also in the nucleus at a single-molecule resolution. Our results show that the sfGFP-based fluorescence complementation (FC) method is a powerful tool for genetically encoded RNA imaging, providing opportunities for investigators to observe diverse RNA dynamics with high spatiotemporal resolution.
RESULTS
Bipartite sfGFP with the MBS-PBS system
A schematic diagram shows the design of the bipartite sfGFP system for low-background RNA tagging in living cells (Fig. 1A). In this system, coat proteins fused with bipartite sfGFPs bind to their corresponding RNA motifs, restoring a fully fluorescent sfGFP. GFP1-10 and GFP11, which were generated by splitting sfGFP between the 10th and the 11th β-strands (Cabantous et al. 2005), were fused to MCP and PCP, respectively (Fig. 1B; Supplemental Table S1). A tandem array of 12 × MBS-PBS was inserted into the 3′ untranslated region (3′ UTR) of the reporter mRNA, which encoded tagRFP657 (Fig. 1B; Supplemental Table S2). To increase the coexpression efficiency, MCP-GFP1-10 and PCP-GFP11 were combined into a polycistronic plasmid using a P2A sequence. An NLS peptide was attached to the coat proteins to accumulate the proteins in the nucleus for immediate tagging of nascent mRNAs. The coat protein construct and the reporter mRNA were then coexpressed in U2OS cells using lentiviral transfection. Double-positive cells were sorted by FACS and used for live-cell imaging. The efficient cleavage of the P2A peptide was verified by western blot (Supplemental Fig. S1). Reporter mRNA particles labeled with the bipartite sfGFP were visible in the cytoplasm with little background signal in the nucleus (Fig. 1C), compared with the high background in the nucleus when using the traditional, intact MS2–GFP system (Supplemental Fig. S2).
To assess nonspecific signal from the random collision of two split sfGFP fragments, we compared cells expressing the bipartite sfGFP system in the absence and presence of the reporter mRNA (Fig. 1D,E). We observed minimal nonspecific signal in the negative control (Fig. 1D), confirming that the observed GFP signal in Figure 1E was due to complementation of the sfGFP fragments on the reporter mRNAs. Next, we compared the bipartite sfGFP system with the previously reported bipartite Venus system (Wu et al. 2014). Cells were imaged with the same excitation power (∼132 mW/cm²) through the corresponding filter cube for each system. The mRNA intensities were measured from the images by using TrackNTrace software (Stein and Thiart 2016). Overall, the fluorescence intensity of the mRNAs labeled with the bipartite sfGFP system was similar to that of the mRNAs labeled with the split Venus system (Supplemental Fig. S3).
Tripartite sfGFP with the MBS-PBS system
We then adopted a tripartite complementation system to improve the signal-to-noise ratio of the single RNA imaging (Fig. 2A). Recently, a tripartite sfGFP system was developed to enhance the rate of fluorescence generation and to reduce the self-assembly background by using multiple rounds of directed evolution (Cabantous et al. 2013). This tripartite sfGFP system consists of two small fragments, GFP10 (residues 194-212) and GFP11 (residues 213-233), and a large GFP1-9 fragment (residues 1-193) (Cabantous et al. 2013). GFP10 and GFP11 were fused to MCP and PCP, respectively. GFP1-9, MCP-GFP10, and PCP-GFP11 were combined in a single vector using two P2A sequences (Fig. 2B; Supplemental Table S3), and were coexpressed along with the reporter mRNA in U2OS cells (Fig. 2C). Again, the coat protein construct alone did not generate GFP signal (Fig. 2D). Only when both the coat protein construct and the reporter mRNA were coexpressed did we observe diffraction-limited spots with strong GFP signal in the nucleus and cytoplasm (Fig. 2E). This result indicates the efficient combination of all three fragments (GFP10, GFP11, and GFP1-9) only in the presence of the reporter mRNA. To test whether the tripartite sfGFP system hinders the degradation of the reporter mRNA, we inhibited transcription with 100 µM 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole (DRB) and imaged the same cells at different time points (Supplemental Fig. S4A). The number of sfGFP-tagged mRNAs per cell significantly decreased at 1.5 and 5.5 h after the treatment (Supplemental Fig. S4B, P < 0.05; Student's t-test). We did not observe any noticeable accumulation of tagged mRNA decay fragments such as had been previously reported in yeast (Garcia and Parker 2015, 2016; Heinrich et al. 2017). The average sfGFP background level inside a cell was similar at 0.5, 1.5, and 5.5 h after the treatment (Supplemental Fig. S4C). Although the association of tripartite sfGFP is known to be irreversible (Cabantous et al. 2013), we did not observe a noticeable increase in the background over several passages of the cells after lentiviral transfection.
Comparison of the bipartite and tripartite sfGFP systems for RNA imaging
To compare the brightness of the reporter mRNAs labeled with the bipartite and tripartite sfGFP constructs, we measured the fluorescence amplitude of the particles under the same imaging conditions (Fig. 3A,B).
Using TrackNTrace software (Stein and Thiart 2016), we obtained the trajectory and the fluorescence intensity of mRNAs from the time-lapse images taken at a 10 Hz frame rate. Figure 3C shows the overall distribution of the fluorescence intensity of mRNAs detected in a cell transfected with either the bipartite or the tripartite system. The mRNAs labeled with the tripartite sfGFP system had a higher mean fluorescence intensity than those labeled with the bipartite sfGFP system. The average fluorescence intensity (Fig. 3D) and the average number of detected mRNA trajectories per cell (Fig. 3E) were also higher in the tripartite than in the bipartite system (P < 0.005 by Student's two-tailed t-test, n = 17 cells for each system). These results indicate that the tripartite sfGFP system provides a higher signal-to-noise ratio than the bipartite system. To confirm that the detected particles were indeed single mRNA molecules, we performed single-molecule fluorescence in situ hybridization (smFISH) for the reporter mRNA (Supplemental Fig. S5). We designed three smFISH probes targeting the MS2–PP7 stem-loop linker sequences, which had a total of 36 binding sites for a single mRNA (Supplemental Table S4). Both the bipartite (Supplemental Fig. S5A) and tripartite (Supplemental Fig. S5B) sfGFP systems showed good overlap with the smFISH signal. Although fixation and smFISH procedures caused a decrease of GFP fluorescence, we were able to detect single mRNA molecules labeled with both smFISH probes and sfGFPs. The detection efficiencies of the bipartite and tripartite sfGFP systems after smFISH (Supplemental Fig. S5C) were estimated by using a previously described analysis method (Horvathova et al. 2017). As expected, the detection efficiency of the tripartite system (58 ± 4%) was much higher than that of the bipartite system (39 ± 6%). Because the fluorescence intensity of sfGFP is higher in live cells, the actual detection efficiencies of the split systems in live cells would be higher than the values presented in Supplemental Figure S5C.
Dynamics of single mRNAs labeled with the tripartite sfGFP system
Previous BiFC-based RNA imaging tools visualized only cytoplasmic mRNAs due to the slow response time of the split system. Because the nuclear export of mRNA occurs within 5-40 min after transcription (Mor et al. 2010), folding and maturation of the split proteins should be completed within this time range to visualize nuclear mRNAs. We found that the tripartite sfGFP system enabled single-molecule imaging of mRNAs in the nucleus (inside the blue dashed line in Fig. 4A), as well as those in the cytoplasm (the area between the red and the blue dashed lines in Fig. 4A). To compare the mobility of mRNA in the nucleus and the cytoplasm (Supplemental Movie S1), we tracked single mRNA particles and plotted their trajectories (Fig. 4A, right). We collected 3629 trajectories in the cytoplasm from 11 cells and 713 trajectories in the nuclei from 13 cells. The ensemble-averaged mean square displacement (EAMSD) of mRNA was calculated using a previously described method (Song et al. 2018). The EAMSD curves of mRNA in the nucleus (blue) and the cytoplasm (red) are plotted in Figure 4B. The diffusion coefficient of mRNA was higher in the cytoplasm (0.10 µm²/sec) than in the nucleus (0.02 µm²/sec), which was consistent with the result in a previous report (Mor et al. 2010).
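To illustrate the EAMSD calculation and diffusion-coefficient fit described above, a minimal sketch follows. This is not the authors' analysis code: the trajectory format, the simulated data, and the linear fit of MSD(τ) = 4Dτ over the first six lag times (the fitting range stated in the Materials and Methods) are assumptions for illustration.

```python
import numpy as np

def eamsd(tracks, max_lag=6):
    """Ensemble-averaged MSD over all trajectories.

    tracks: list of (T_i, 2) arrays of x, y positions in micrometers.
    Returns <r^2(tau)> for tau = 1..max_lag frames.
    """
    msd = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for xy in tracks:
        for lag in range(1, max_lag + 1):
            if len(xy) <= lag:
                continue
            d = xy[lag:] - xy[:-lag]          # displacements at this lag
            msd[lag - 1] += np.sum(d ** 2)    # sum of squared displacement norms
            counts[lag - 1] += len(d)
    return msd / counts

# Hypothetical example: 50 trajectories of 100 frames at a 10 Hz frame rate
rng = np.random.default_rng(0)
D_true, dt = 0.10, 0.1                        # um^2/s and s (frame interval)
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(50, 100, 2))
tracks = [np.cumsum(s, axis=0) for s in steps]

# For free 2D diffusion, MSD(tau) = 4*D*tau; fit the first six lag times
lags = np.arange(1, 7) * dt
D_fit = np.polyfit(lags, eamsd(tracks, max_lag=6), 1)[0] / 4.0
print(f"fitted D ~ {D_fit:.3f} um^2/s (simulated truth {D_true})")
```

Run on the simulated trajectories, the fitted D recovers the input value, mirroring how the cytoplasmic and nuclear coefficients above would be extracted from the measured EAMSD curves.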
Furthermore, we were able to observe strong fluorescence signals from 1-2 loci inside the nuclei, which presumably indicate transcription sites (white arrows in Fig. 4C; Supplemental Movie S2). Our results suggest that the tripartite sfGFP with the MBS-PBS system enables background-free imaging of not only cytoplasmic but also nuclear mRNA dynamics.
DISCUSSION
In this report, we have demonstrated that split sfGFP with the MBS-PBS system is a powerful tool for imaging single mRNA dynamics with minimal background. The overall performance of the bipartite sfGFP system is similar to that of the previously reported split Venus system for imaging single mRNAs (Wu et al. 2014). By adapting the tripartite sfGFP (Cabantous et al. 2013), we have significantly improved the signal-to-background ratio and enabled single mRNA imaging in the nucleus as well as in the cytoplasm in living cells. Previously, some limitations of BiFC-based RNA imaging have been reported, such as (i) background from the spontaneous assembly of split FP, (ii) slow maturation of FPs, and (iii) irreversible association of the FP fragments (Ozawa et al. 2007; Kerppola 2009; Xia et al. 2017). Because spontaneous assembly of three components is much less likely to occur than of two components, the background fluorescence can be further suppressed by using the tripartite sfGFP system. Moreover, the synergistic effect of the relatively short maturation time of sfGFP (Pédelacq et al. 2006; Iizuka et al. 2011; Khmelinskii et al. 2012; Balleza et al. 2018) and the improved folding and complementation efficiency of the tripartite sfGFP system (Cabantous et al. 2013) allowed visualization of transcription sites and single mRNAs in the nucleus. The tripartite sfGFP used in our experiment has a different amino acid sequence from that of the bipartite sfGFP. Therefore, we cannot simply attribute the better performance of the tripartite sfGFP system to the difference between binary and ternary interactions. It is possible that other improved bipartite split FP systems (Huang et al. 2015; Feng et al. 2017; Köker et al. 2018) may perform as well as the tripartite sfGFP system tested in this report. Because the association of split sfGFP is known to be irreversible (Cabantous et al. 2013), we investigated whether the reconstituted sfGFP hinders mRNA degradation or increases the background fluorescence over time. The number of sfGFP-labeled mRNAs significantly decreased at 1.5 and 5.5 h after inhibition of transcription by DRB treatment, and we did not observe any significant accumulation of sfGFP-tagged mRNA decay fragments. While there have been some concerns about the possible accumulation of MS2-labeled mRNA decay fragments in bacteria (Golding and Cox 2004) and budding yeast (Garcia and Parker 2015, 2016; Haimovich et al. 2016; Heinrich et al. 2017), recent studies have reported that such degradation artifacts are not observed using the traditional MS2–GFP system in mammalian cells and tissues (Horvathova et al. 2017; Tutucci et al. 2018; Kim et al. 2019). Because the average half-life of mRNA in mammalian cells (several hours) is much longer than in bacteria (∼5 min) and yeast (∼23 min), MS2 and PP7 RNA labeling methods may be less prone to the degradation artifact in higher organisms (Das et al. 2018; Tutucci et al. 2018; Kim et al. 2019). In addition, we did not find a significant increase in the background fluorescence after the reporter mRNAs were degraded.
The accumulation rate of background fluorescence depends on several factors, such as the lifetime and expression level of the tagged mRNA, the lifetime of reconstituted sfGFP, and the cell division rate. We empirically found that the irreversibility of split sfGFP did not hamper single-mRNA imaging in this study. We anticipate that the FC-based RNA imaging technology will have great potential for intravital imaging of RNA because of the minimal background noise. An optimal candidate for intravital imaging is a red-shifted split fluorescent protein (Chu et al. 2009; Filonov and Verkhusha 2013; Han et al. 2014; Chen et al. 2015), due to less absorbance and light scattering in tissue. In addition to split fluorescent proteins, there are various two-hybrid systems for in vivo imaging modalities, such as bioluminescence and positron emission tomography (PET) (Shekhawat and Ghosh 2011). For instance, Gambhir and coworkers engineered a red light-emitting bioluminescence resonance energy transfer (BRET) system (Dragulescu-Andrasi et al. 2011) and a PET-based split reporter system using herpes simplex virus type 1 thymidine kinase (HSV1-TK) (Massoud et al.).
MATERIALS AND METHODS
Cloning and plasmid construction
All plasmids for the split sfGFPs with the MBS-PBS hybrid system were constructed in lentiviral vectors. To generate the reporter mRNA construct with the 12 × MBS-PBS, we replaced CFP in the phage-CMV-CFP-12 × MBS-PBS plasmid (gift from Dr. Robert H. Singer) with tagRFP657. To generate the polycistronic vectors with the bipartite and tripartite sfGFPs, we amplified nls-ha-MCP and nls-ha-PCP by polymerase chain reaction (PCR) from the ubc-nls-ha-MCP-VenusN-nls-ha-PCP-VenusC plasmid (Addgene plasmid #52985), synthesized the sfGFP fragments, and inserted them into the pCCLsin.PPT.UbiC.GFP lentiviral backbone (Follenzi and Naldini 2002). The amino acid and DNA sequences are provided in Supplemental Tables 1-3.
Lentivirus production and transfection
Lentiviral vectors for the coat proteins were prepared by cotransfecting 293T cells with the third-generation packaging constructs (pMDLg/pRRE, pRSV-REV, and pMD2.VSVG) and the transfer vector by calcium phosphate precipitation. The culture media was replaced with high-glucose Dulbecco's modified Eagle medium (DMEM, Thermo Fisher Scientific) supplemented with 10% fetal bovine serum (FBS), 1% GlutaMAX (Thermo Fisher Scientific), and 0.25% penicillin-streptomycin (PS, Thermo Fisher Scientific) at 12-16 h after transfection. After 24 and 48 h, the supernatant was collected and filtered with a 0.22 µm syringe filter. The lentiviral vector for the reporter mRNA was prepared similarly to a previously described method (Mostoslavsky et al. 2005). Briefly, the medium was changed to DMEM, 10% FBS, and 1% GlutaMAX (without antibiotics) at least 1 h before transfection. Next, 293T cells were transfected with Gag-Pol, Rev, Tat, VSVG, and the transfer vector by using Fugene HD transfection reagent (Promega). The culture media was replaced with DMEM supplemented with 10% FBS, 1% GlutaMAX, and 1% PS at 12-16 h after transfection. After 36 h, the supernatant was collected and filtered with a 0.22 µm syringe filter. The viral production and titration were confirmed (>5 × 10⁵ IFU/mL) with the Lenti-X GoStix kit (Clontech). The human osteosarcoma U2OS cell line was purchased from the Korean Cell Line Bank and grown in DMEM with 10% FBS, 1% GlutaMAX, and 1% PS.
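As a rough, hedged illustration of what such a titer implies for transduction (the 1 mL infection volume below is a hypothetical value, not one stated in the text): applying 1 mL of supernatant at 5 × 10⁵ IFU/mL to a well seeded with 1 × 10⁵ cells would correspond to a multiplicity of infection of

$$ \mathrm{MOI} = \frac{\text{titer} \times \text{volume}}{\text{cell number}} = \frac{5 \times 10^{5}\ \mathrm{IFU/mL} \times 1\ \mathrm{mL}}{1 \times 10^{5}\ \text{cells}} = 5 $$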
For lentiviral transfection, the LV pellet was resuspended in DMEM with polybrene (6 µg/mL, Sigma) and added to the U2OS cells seeded in a six-well plate (1 × 10⁵ per well). The infected U2OS cells were sorted with a FACS Aria II (BD Biosciences). The positive cells expressing both GFP and tagRFP657 were collected and used for live-cell imaging.
Imaging and tracking single mRNAs in live cells
To perform live-cell imaging, we removed the growth medium from the cell cultures and replaced it with imaging media, which was phenol-red free Leibovitz's L-15 medium (Thermo Fisher Scientific) containing 10% FBS, 1% GlutaMAX, and 1% PS. Wide-field fluorescence images were taken using an Olympus IX73 inverted microscope equipped with a U Apochromat 150× 1.45 NA objective (Olympus), two iXon Ultra 897 electron-multiplying charge-coupled device (EMCCD) cameras (Andor), an MS-2000 XYZ automated stage (ASI), and a Chamlide TC stage-top incubator system (Live Cell Instrument). 488 nm and 561 nm diode lasers (Cobolt) were used to excite the GFP and tagRFP657, respectively. The fluorescence emission was filtered with 525/50 and 605/52 bandpass filters (TRF89902-EM, Chroma). Time-lapse images for RNA tracking were taken at 20 frames per second (fps) with a 50 msec exposure time using Micro-Manager software. Tracking of a single mRNA particle was performed with TrackNTrace software (Stein and Thiart 2016). The first six points of the EAMSD curves were fitted to obtain the diffusion coefficients. For transcription inhibition experiments, cells were imaged after treatment with 100 µM DRB (Sigma D1916). Z-section images of cells were taken at 0.5, 1.5, and 5.5 h after DRB treatment. After the maximum projection of z-stack images, mRNAs were detected by the TrackNTrace software (Stein and Thiart 2016). The background level was determined by obtaining the median pixel intensity in the cytoplasm. To compare the previously reported split Venus-based reporter system (Wu et al. 2014) and our bipartite system, all of the images were taken under the same imaging conditions. As Venus and GFP have different fluorescence spectra, we measured the fluorescence intensity of the RNA particles under the same LED power density of 132 ± 2 mW/cm², measured with the corresponding excitation filter. The mRNAs were again detected by automated particle detecting software (Stein and Thiart 2016) and analyzed.
Western blot
Proteins (14 µg) obtained from lentivirus-infected cell lines were separated on 4%-12% Bis-Tris polyacrylamide precast gels in MES-SDS running buffer under reducing conditions and transferred to nitrocellulose membranes by a Mini Blot Module (Thermo Scientific). Anti-GFP (1:1000, A6455, Thermo Scientific) and anti-GAPDH (1:20000, G9545, Sigma) were used as primary antibodies, and anti-rabbit IgG conjugated to HRP (1:5000, SA002-500, GenDEPOT) was used as the secondary antibody. Pierce ECL western blotting substrate (Thermo Scientific) was used for HRP detection. The western blots were imaged by a LAS 4000 (GE Healthcare Life Sciences).
Single-molecule fluorescence in situ hybridization (smFISH) and colocalization analysis
Cells were fixed with 4% paraformaldehyde (PFA) in phosphate-buffered saline (PBS). After permeabilization with 0.1% Triton X-100 in PBS for 10 min at room temperature (RT), the cells were prehybridized with 10% formamide in 2× SSC for 30 min at RT.
Hybridization was performed at 37°C for 3 h using hybridization buffer (0.1 µM 20-mer DNA probes [Supplemental Table 4], 2× SSC, 10% formamide, 10% dextran sulfate, 2 mg/mL bovine serum albumin [BSA], 0.025 mg/mL Escherichia coli transfer RNA, and 0.025 mg/mL sheared salmon sperm DNA in ribonuclease [RNase]-free water). The cells were then washed twice with warm 10% formamide in 2× SSC for 20 min, followed by multiple washes with 2× SSC and DAPI staining. For colocalization analysis, the cells were imaged in PBS using an Olympus IX73 inverted microscope equipped with a U Apochromat 150× 1.45 NA objective (Olympus), an iXon Ultra 897 EMCCD camera (Andor), a SOLA SE light-emitting diode (Lumencor), an EGFP filter set (Chroma, 49002) and a Cy3/TRITC filter set (Chroma, 49004). After registration of the two-color images, particles were detected with the TrackNTrace software (Stein and Thiart 2016). If the distance between two particles in the two different channels was shorter than 300 nm, it was counted as colocalization. The detection efficiencies of the split systems were calculated by using the method described by Horvathova et al. (2017).
SUPPLEMENTAL MATERIAL
Supplemental material is available for this article.
2019-10-24T09:07:54.900Z
2019-10-22T00:00:00.000
{ "year": 2020, "sha1": "4e7ac52d99d7f3997f92990918dca5e071708ff9", "oa_license": null, "oa_url": "http://rnajournal.cshlp.org/content/26/1/101.full.pdf", "oa_status": "BRONZE", "pdf_src": "ScienceParsePlus", "pdf_hash": "7581d596169a997bfc787396f6b1d227dab89e6c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
17662072
pes2o/s2orc
v3-fos-license
Validation of the Target Protein of Insecticidal Dihydroagarofuran Sesquiterpene Polyesters
A series of insecticidal dihydroagarofuran sesquiterpene polyesters were isolated from the root bark of Chinese bittersweet (Celastrus angulatus Max). A previous study indicated that these compounds affect the digestive system of insects, and aminopeptidase N3 and V-ATPase have been identified as the most likely putative target proteins by affinity chromatography. In this study, the correlation between the affinity of the compounds for subunit H and the insecticidal activity or the inhibitory effect on the activity of V-ATPase was analyzed to validate the target protein. The results indicated that subunit H of V-ATPase is the target protein of the insecticidal compounds. In addition, the possible mechanism of action of the compounds is discussed. The results provide new ideas for developing pesticides acting on the V-ATPase of insects.
Introduction
The identification of target proteins is the basis for the development of new pesticides. The discovery of novel targets may result in a series of new pesticides. Moreover, in the development of pesticides, natural products are useful probes for providing new targets. A common example is nicotine; the identification and elucidation of the molecular structure of the target protein of nicotine (i.e., the insect nAChR) promoted the development of neonicotinoid pesticides [1-3]. A series of insecticidal compounds, namely sesquiterpene polyesters sharing a dihydro-β-agarofuran sesquiterpenoid skeleton, were isolated and characterized by Wu et al. from the root bark of Celastrus angulatus Max (Celastraceae) [4-6]. These insecticidal compounds mainly affect the digestive system of pests, presenting a series of symptoms such as excitement, twitching, emesis, and loss of body fluid after oral administration [7,8]. Transmission electron microscopy (TEM) analysis showed that the midgut epithelial cells of Mythimna separata Walker larvae that ingested celangulin V (CV) were damaged, showing visible vacuolization of the cytoplasm, serious disruption of microvilli, fragmentation of rough endoplasmic reticulum cisternae, and rupture of the plasma membrane. Subsequently, these morphological changes induce leakage of cytoplasmic contents into the midgut lumen, resulting in the appearance of numerous lysosome-like vacuoles and secretion [8,9]. However, the mechanisms of action and the target protein of the dihydroagarofuran sesquiterpene polyesters remain unknown. In our previous study, 11 binding proteins were isolated by affinity chromatography using a derivative of CV (one of the insecticidal compounds) as ligand [10]. Considering the functions of these proteins and the symptoms caused by these compounds, we speculated that V-ATPase and aminopeptidase N (APN)-3 are the putative target proteins.
In the present study, we measured the insecticidal toxicity and the enzyme-inhibiting activity of 12 dihydroagarofuran sesquiterpene polyesters against M. separata larvae. The target protein was then validated based on correlation analysis. The results showed that subunit H of V-ATPase is the target protein of the dihydroagarofuran sesquiterpene polyesters. As with tulipaline A, one of the lactones and aromatic aldehydes that could be exploited as novel nematicides through inhibiting the activity of V-ATPase [11], this finding also provides ideas for the development of novel pesticides.
Results
Insecticidal Activity
For the subsequent correlation analysis, the insecticidal activity of the 12 dihydroagarofuran sesquiterpene polyesters against M. separata larvae was evaluated. The results showed that CV-6-N-methylisatoic, CV-6-isobutyric acid ester, CV-6-ketone, NW62, and NW57 did not have insecticidal activity at a dose of 668.45 μg/g, whereas CV, CV-6-α-aminopropanoic acid ester, CV-6-aminoacetic acid ester, wilforine, NW69, NW03, and NW70 were all toxic to the fifth instar larvae. The LD50 of the insecticidal compounds was then measured (Table 1).
Table 1. Insecticidal activity of dihydroagarofuran sesquiterpene polyesters against the fifth instar larvae of M. separata.
Among the seven insecticidal compounds, the LD50 value of wilforine was the lowest. CV-6-α-aminopropanoic acid ester and NW70 also had high toxicity, with LD50 values of 33.605 and 86.271 μg/g, respectively, whereas CV and CV-6-aminoacetic acid ester had relatively lower insecticidal activity.
Effects on the Activity of APN
Our previous study indicated that the symptoms caused by dihydroagarofuran sesquiterpene polyesters are similar to those of Bt toxin; APN is the receptor of Bt toxin [12-14]. Here, we chose CV as the representative compound to measure the effect on APN activity. The results showed that the APN activity of the group treated with CV had no significant difference from that of the group treated with DMSO. Thus, CV had no effect on the activity of APN (Figure 1).
Effects on the Activity of V-ATPase
The results of the effects of the 12 dihydroagarofuran sesquiterpene polyesters on the V-ATPase activity of M. separata larvae are shown in Table 2. Table 2 shows that the positive control, bafilomycin A1, has an inhibition rate of 48.29% at a dose of 3 µM. Among the dihydroagarofuran sesquiterpene polyesters, wilforine displays a high inhibitory effect against V-ATPase, with an inhibition rate of 54.78% at a concentration of 100 µM. CV, CV-6-aminoacetic acid ester, CV-6-α-aminopropanoic acid ester, NW03, NW69, NW70, CV-6-N-methylisatoic, and CV-6-isobutyric acid ester showed relatively lower inhibition rates. Compounds NW57, NW62, and CV-6-ketone barely had any effect on V-ATPase. Comparison of Tables 1 and 2 shows that, in general, the tested dihydroagarofuran sesquiterpene polyesters that had high insecticidal activity also had a high inhibitory effect on V-ATPase. Moreover, correlation analysis demonstrated that the Pearson correlation coefficient between the LD50 and the probit value of the inhibition rate was −0.816, which was significant at the 0.05 significance level (two-tailed) (p = 0.025; Figure 2).
Interaction between Subunit H and Dihydroagarofuran Sesquiterpene Polyesters
From the results of the V-ATPase assay and the correlation analysis, we could roughly conclude that V-ATPase is the target protein of the dihydroagarofuran sesquiterpene polyesters. However, the proteins separated by affinity chromatography include subunits a, B, and H of V-ATPase. Subunit H of V-ATPase is essential for the catalysis but not for the assembly of the enzyme [15-17]. Furthermore, it acts as an inhibitor of ATP hydrolysis in the free V1 complex [16], and it is probably the binding site for other proteins that interact with V-ATPase [17,18]. Owing to the important function of subunit H, we first cloned, expressed, and purified subunit H to study the interaction between subunit H and the 12 dihydroagarofuran sesquiterpene polyesters.
Expression and Purification of Subunit H of V-ATPase
Subunit H was expressed at 18°C after induction with isopropyl-β-D-thiogalactoside (IPTG) for 18 h. After two rounds of purification using Ni-NTA, subunit H (~55 kDa) was obtained and concentrated using an ultrafiltration membrane (Figure 3). Figure 3 shows that the purity of the recombinant subunit H can meet the requirements for interaction analysis.
Interaction between Subunit H and Small Molecules
After purifying subunit H, the interaction between subunit H and the dihydroagarofuran sesquiterpene polyesters was evaluated by microscale thermophoresis (MST, Monolith NT.115). The results indicated that four compounds could not bind to the recombinant subunit H, whereas the others could; KD values were obtained from the binding curves (Table 3).
Table 3. Interaction between dihydroagarofuran sesquiterpene polyesters and subunit H of M. separata.
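As a hedged note on how such KD values are obtained: MST binding curves of this kind are typically fitted with the standard 1:1 binding isotherm below (assuming the titrated compound is in excess over the labeled protein); the excerpt itself does not state the fitting model used:

$$ f_{\mathrm{bound}} = \frac{[L]}{K_D + [L]} $$

where [L] is the concentration of the titrated compound and f_bound is the fraction of subunit H in the bound state.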
Based on the data in Tables 1-3, the correlation of KD with the insecticidal toxicity, as well as with the inhibition rate of V-ATPase activity, was analyzed. First, the logarithm of LD50 and KD could be fitted with the regression curve y = 53.174x + 47.634; the Pearson correlation coefficient was 0.870, and the p-value was 0.011 (two-tailed), which is significant at the 0.05 significance level (Figure 4a). Second, the corresponding probit values of the inhibition rate of V-ATPase and the KD values were also correlated, fitting the regression equation y = −95.912x + 580.47; the Pearson correlation coefficient was −0.730, which is significant at the 0.05 significance level (p = 0.04) (Figure 4b).
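A minimal sketch of the correlation analysis described above is given below. The numeric arrays are hypothetical placeholders (the per-compound LD50, KD, and inhibition values sit in Tables 1-3, which are not reproduced in this excerpt), and the classical probit transform, probit = 5 + Φ⁻¹(p), is an assumption about how the probit values were obtained:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins for the per-compound values in Tables 1-3
ld50 = np.array([33.6, 86.3, 120.0, 250.0, 400.0, 590.0, 640.0])    # ug/g
kd = np.array([5.0, 20.0, 60.0, 120.0, 213.0, 300.0, 450.0])        # uM
inhibition = np.array([0.55, 0.40, 0.33, 0.28, 0.24, 0.18, 0.12])   # fraction inhibited

# Classical probit transform of the inhibition rates (assumed definition)
probit = 5 + stats.norm.ppf(inhibition)

# Pearson correlation of log10(LD50) with KD (cf. Figure 4a)
r_a, p_a = stats.pearsonr(np.log10(ld50), kd)

# Pearson correlation of KD with probit(inhibition rate) (cf. Figure 4b)
r_b, p_b = stats.pearsonr(kd, probit)

# Least-squares regression line of the form y = a*x + b, as reported in the text
a, b = np.polyfit(np.log10(ld50), kd, 1)
print(f"Fig.4a: r = {r_a:.3f} (p = {p_a:.3f}); Fig.4b: r = {r_b:.3f} (p = {p_b:.3f})")
print(f"fitted line: y = {a:.3f}x + {b:.3f}")
```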
CV-6-N-methylisatoic and CV-6-isobutyric acid ester were exceptions among all these compounds. CV-6-N-methylisatoic did not show insecticidal activity, but it could inhibit the activity of V-ATPase, with an inhibition rate of 24.04%, and bind to subunit H with a KD value of 213 µM. CV-6-isobutyric acid ester, by contrast, did not show toxicity to M. separata larvae and could not bind to subunit H, but it showed low inhibitory activity against V-ATPase. The possible reason is that the insecticidal assay was conducted in vivo, where penetration, storage, and degradation by detoxification enzymes occur, thereby reducing the insecticidal activity. By contrast, the experiments on V-ATPase activity and on the interaction with subunit H were conducted in vitro, where the compounds are in direct contact with the proteins. Except for wilforine, the dihydroagarofuran sesquiterpene polyesters had lower inhibition rates against V-ATPase from the midgut of M. separata larvae. One possible reason is that the insecticidal compounds are polyesters, which have poor solubility in aqueous solution. Owing to the difficulty of increasing the solubility, these compounds showed lower inhibition of V-ATPase. In addition, the low solubility of the dihydroagarofuran sesquiterpene polyesters may lead to inaccuracy in the measurement of KD.
Discussion
Correlation analysis is one of the methods used for target validation. The goal of correlation analysis is to analyze the correlation between the different affinities of a series of analogs of a small molecule of interest for the target protein and the potencies with which these molecules cause the corresponding phenotype. The key to applying this method is to synthesize analogs of the bioactive small molecule with a wide range of activities (IC50 or LD50 values) spanning at least three orders of magnitude, and to have the values more or less equally distributed over those three orders of magnitude rather than concentrated around one particular value [19]. In general, synthetic bioactive molecules can easily meet the above requirements; by contrast, obtaining sufficient numbers of natural bioactive products for such analysis is difficult. In this study, the insecticidal activities (LD50 values) of the dihydroagarofuran sesquiterpene polyesters vary over three orders of magnitude, but their number is insufficient. One important function of the midgut epithelial cells of lepidopteran insects is to transport K⁺ from the haemolymph to the gut lumen.
The process is performed by V-ATPase, which is located at the goblet cell apical membranes, together with the V-ATPase's partner, the K⁺/nH⁺ antiporter [20]. The combined action of the V-ATPase and the K⁺/2H⁺ antiporter generates a transepithelial voltage [21]. Hence, the measurement of the transmembrane potential difference can indicate the effect of a bioactive molecule on V-ATPase. We evaluated the influence of CV on the transmembrane potential of the goblet cell apical membranes of the sixth instar larvae of M. separata Walker. The results demonstrated that CV can induce depolarization of the midgut apical membrane potential, that is, a decrease of the potential difference, which is another indication that CV can inhibit V-ATPase activity [22]. In addition to correlation analysis, genetic approaches can be employed for target validation. Through binding to a specific protein, a bioactive small molecule causes a cellular phenotype and loss of the protein's function. Thus, over-expression of the target protein produces resistance to the small molecule; deletion of the target protein causes the same cellular phenotype as the small molecule; and reduction of the target protein causes hypersensitivity to the small molecule [19]. Subunit H has a physiological function only after assembling with the other subunits, because V-ATPase is a multi-subunit enzyme. Over-expression of subunit H cannot induce excess assembly of V-ATPase, which would cause resistance to the small molecule, whereas reduction of subunit H decreases the assembly of V-ATPase, which causes hypersensitivity to small molecules. Using RNAi technology, we injected dsRNA of subunit H into the third instar larvae of M. separata Walker, which significantly decreased the expression of subunit H. Moreover, silencing of the subunit H gene induced the death of larvae, which showed the same symptoms as those induced by CV and the other dihydroagarofuran sesquiterpene polyesters [23]. The target validation showed that subunit H of V-ATPase is one of the target proteins of the insecticidal dihydroagarofuran sesquiterpene polyesters. Based on the above results, we speculate on the mechanism of action of the insecticidal dihydroagarofuran sesquiterpene polyesters as follows. The small molecular compounds are taken up by larvae and then transported through the peritrophic membrane to the midgut cells. The small molecules then bind to subunit H of V-ATPase on the plasma membrane, which affects the assembly of subunit H with the other subunits so as to inhibit the activity; alternatively, subunit H carrying the small bioactive molecules becomes part of the V-ATPase, leading to malfunction of the whole enzyme. According to the function of V-ATPase, inhibition of its activity may lead to three kinds of results: (1) amino acids cannot be transported into the gut cells, thereby influencing protein synthesis and the function of the whole cell; (2) the highly alkaline environment in the midgut cannot be maintained, thereby hindering the numerous enzymes that function in a high-pH environment; and (3) K⁺ cannot enter the gut lumen from the cells, thereby inducing accumulation of K⁺ in the cells. As a result, the osmotic pressure of the cells will be imbalanced and an excessive amount of water will enter the cells, resulting in swollen and cracked cells. Subsequently, the haemolymph will enter the gut lumen, which might be the reason for the loss of body fluid.
Conclusions

The correlation analysis of the target protein in the current study, along with the RNAi and electrophysiologic assays in a previous study, showed that subunit H of V-ATPase is the target protein of the insecticidal dihydroagarofuran sesquiterpene polyesters.

Insects

Laboratory-adapted M. separata (Walker) was obtained from the Institute of Pesticide Science, Northwest A & F University (NWAFU, Yangling, China). The strain has been reared on wheat and corn leaves under laboratory conditions for about 20 years and was never treated with insecticides.

Bioassay of Insecticidal Activity

A force-feeding bioassay was conducted to test the insecticidal activity of the 12 dihydroagarofuran sesquiterpene polyesters against fifth instar larvae of M. separata [24]. The 12 tested compounds were dissolved in DMSO and diluted to 5 concentrations by a serial dilution method. Second-day fifth instar larvae were selected and anesthetized with a cotton ball dipped in ether in a Petri dish. Then, 0.5 µL of tested compound was pipetted into the mouthparts of each larva. The larvae were transferred into a 24-well plate after swallowing the compounds. For each concentration of each compound, 12 larvae were tested, with three replicates. The symptoms presented by the larvae were observed after forced feeding, and the mortality was calculated 24, 48, and 72 h later. At the same time, 20 larvae were randomly selected and weighed to calculate the average body weight. Finally, the LD50 (µg/g) of each compound was calculated from the LC50 value, the volume of compound administered, and the average weight of the larvae.
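The dose conversion in this last step can be written out explicitly. A minimal sketch follows, assuming the LC50 is expressed as a concentration of the force-fed solution (µg/µL); the numbers in the example are illustrative only.

```python
# LD50 (µg/g) from LC50, dose volume, and mean larval body weight.
def ld50_ug_per_g(lc50_ug_per_ul: float, volume_ul: float, mean_weight_g: float) -> float:
    dose_ug = lc50_ug_per_ul * volume_ul  # amount ingested per larva at the LC50
    return dose_ug / mean_weight_g        # normalize to body weight

# Example: LC50 of 2.0 µg/µL, 0.5 µL force-fed per larva, 0.35 g mean weight.
print(f"LD50 = {ld50_ug_per_g(2.0, 0.5, 0.35):.1f} µg/g")
```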
APN Activity Assays

The midgut BBMV of M. separata was isolated according to the MgCl2 precipitation method [25], as modified by Ferre et al. [26]. The final pellet was dissolved in buffer C (150 mM NaCl, 5 mM EGTA, 1 mM PMSF, 20 mM Tris-HCl, and 1% CHAPS) [27]. The protein concentration of the BBMV was measured by Bradford assay. APN activity was measured according to the methods published by Liang et al. [28], Watanabe et al. [29], Takesue et al. [30], and Silva-Filha et al. [31]. The enzyme assay system included 1 mL of buffer (0.25 mol/L Tris-HCl, 26 mM NaCl, pH 7.8), 10 µL of BBMV extract, and 1.6 µL of CV (30.2 mM, dissolved in DMSO). The nominal final concentration of CV in the reaction solution was 47 µM, calculated from the amount added to the buffer; some CV may have come out of solution. After incubation of the mixture for 30 min at 37 °C, 16 µL of substrate (15.96 mg of Leu-pNA dissolved in 1 mL of methanol) was added. The absorbance at 405 nm was measured after incubation for 60 min at 37 °C. The assay was repeated three times, and DMSO was used as the control.

V-ATPase Activity Assays

The V-ATPase activity was measured according to the method published by Tiburcy et al. [32]. The tested dihydroagarofuran sesquiterpene polyesters were dissolved in DMSO, and the final concentration of the polyesters in the reaction system was 100 µM. Bafilomycin A1 was used as the positive control at a final concentration of 3 µM. Midguts of sixth instar larvae of M. separata were removed from the larvae in Ringer solution; the peritrophic membrane, gut contents, and Malpighian tubules were discarded. The rear parts of the midguts were placed in ice-cold low-adhesion Eppendorf tubes and frozen in liquid nitrogen. Then, the midguts were homogenized and centrifuged twice according to the method published by Tiburcy et al. [32]. The V-ATPase assays were performed with three replications. A 160 µL reaction solution consisted of 50 µL of PO buffer (160 mM Tris-Mes, pH 6.9), 20 µL of MV buffer (30 mM MgCl2 and 0.8 mM sodium orthovanadate), 20 µL of KN buffer (160 mM KCl and 4 mM NaN3), and 10 µL of DMSO/Bafilomycin A1/tested compound. After 10 min of pre-incubation at 30 °C, 20 µL of Tris-ATP (8 mM) was added to start the reaction. The reaction was stopped after 60 min of incubation at 30 °C by freezing the samples in liquid nitrogen. To measure the V-ATPase activity, the produced inorganic phosphate was determined as described by Wieczorek et al. [33].
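From the measured phosphate release, an inhibition rate such as the 24.04% reported above for CV-6-N-methylisatoic can be computed against the DMSO control. The sketch below is a minimal illustration that treats the readout as proportional to activity and omits the bafilomycin-insensitive background correction; the readings are invented.

```python
# Percent inhibition of V-ATPase activity relative to the DMSO control.
import numpy as np

def inhibition_rate(a_compound: np.ndarray, a_control: np.ndarray) -> float:
    return 100.0 * (1.0 - a_compound.mean() / a_control.mean())

ctrl = np.array([0.52, 0.50, 0.54])  # DMSO control, triplicate readings (illustrative)
cmpd = np.array([0.40, 0.39, 0.41])  # tested polyester at 100 µM (illustrative)
print(f"inhibition = {inhibition_rate(cmpd, ctrl):.1f}%")
```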
Expression and Purification of Subunit H of V-ATPase

The gene for V-ATPase subunit H of M. separata was cloned by RACE technology [34]. To obtain enough recombinant protein for the binding affinity measurements, the CDS of V-ATPase subunit H was cloned into the pET-15b vector with a His tag added at the N-terminus and then heterologously expressed in E. coli BL21 (DE3). The construction of the expression vector was described by Li et al. [35]. When the OD600 of the culture reached ~0.6, IPTG was added to the LB broth at a final concentration of 0.4 mmol/L to induce expression of the recombinant protein at 18 °C for 18 h. Afterward, the bacterial cells were collected by centrifugation and resuspended in lysis buffer (20 mmol/L Tris-HCl, 300 mmol/L NaCl, and 5 mmol/L imidazole). Lysozyme and nuclease were added to the suspension, followed by incubation on ice for 30 min. The cells were then lysed by ultrasonication. After centrifugation at 12,000 rpm for 30 min (4 °C), the supernatant was used to purify the recombinant protein by Ni-NTA. The supernatant was loaded onto the Ni-NTA column, which was then washed with buffer (20 mmol/L Tris-HCl, 300 mmol/L NaCl, pH 8.0) containing 20 mmol/L and then 40 mmol/L imidazole. An elution buffer (20 mmol/L Tris-HCl, 300 mmol/L NaCl, 500 mmol/L imidazole, pH 8.0) was then used to elute all the bound proteins. SDS-PAGE analysis showed that the proteins eluted by the elution buffer contained a number of nonspecific proteins; therefore, a second Ni-NTA purification was necessary. After dialysis at 4 °C for 24 h, the eluate was loaded onto the Ni-NTA column again. Each fraction washed with buffer (20 mmol/L Tris-HCl, 300 mmol/L NaCl, pH 8.0) containing 40, 80, 100, and 200 mM imidazole was collected and analyzed by SDS-PAGE electrophoresis (Liuyi Biotechnology, Beijing, China).

Interaction between Subunit H and Small Molecules

The purified recombinant protein was concentrated using an ultrafiltration membrane (Merck Millipore Amicon, Shanghai, China) and then diluted to 16 µM. The protein was then labeled with RED-NHS according to the user's manual of the Monolith NT.115 (NanoTemper, Munich, Germany). The labeled protein sample was diluted with buffer (20 mmol/L Tris-HCl, 300 mmol/L NaCl, pH 8.0) to ensure that the fluorescence value was between 200 and 1500. Tween-20 (0.1% final concentration) was added to enhance the sample quality. Each dihydroagarofuran sesquiterpene polyester was dissolved in DMSO, and 16 different concentrations were prepared by serial dilution. After mixing the 16 samples with labeled protein at a ratio of 1:1, MST capillaries were filled and placed on the tray for the binding affinity measurements. The KD value of each compound with subunit H was calculated from the binding curve [36].
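As a sketch of how a KD can be extracted from such a titration, the code below fits the quadratic law of mass action to MST-type signal data. The labeled-protein concentration (8 µM after the 1:1 mixing of the 16 µM stock) follows from the protocol above; the signal values and the linear signal-to-fraction-bound mapping are assumptions.

```python
# Fit KD from a 16-point MST titration with the quadratic binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

T = 8.0  # labeled subunit H concentration after 1:1 mixing, µM

def mst_signal(L, kd, base, amp):
    # Fraction bound from the law of mass action (ligand not assumed in excess),
    # mapped linearly onto the measured MST signal.
    b = L + T + kd
    fb = (b - np.sqrt(b * b - 4.0 * L * T)) / (2.0 * T)
    return base + amp * fb

L = np.array([0.5, 1, 2, 5, 10, 25, 50, 100, 250, 500, 1000, 2000])  # ligand, µM
signal = np.array([800, 801, 803, 806, 812, 824, 840, 858, 880, 895, 903, 907.0])

(kd, base, amp), _ = curve_fit(mst_signal, L, signal, p0=[200.0, 800.0, 110.0])
print(f"KD = {kd:.0f} µM")
```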
Hippo pathway coactivators Yap and Taz are required to coordinate mammalian liver regeneration

The mammalian liver has a remarkable capacity for repair following injury. Removal of up to two-thirds of liver mass triggers a series of events that include extracellular matrix remodeling, coordinated hepatic cell cycle re-entry, restoration of liver mass and tissue remodeling to return the damaged liver to its normal state. Although there has been considerable advancement of our knowledge concerning the regenerative capacity of the mammalian liver, many outstanding questions remain, such as: how does the regenerating liver stop proliferating when the appropriate mass is restored, and how do these mechanisms relate to the normal regulation of organ size during development? The Hippo pathway has been proposed to be central in mediating both events: organ size control during development and following regeneration. In this report, we examined the role of Yap and Taz, key components of the Hippo pathway, in liver size regulation, both in the context of development and homeostasis. Our studies reveal that, contrary to current paradigms, Yap/Taz are not required for developmental regulation of liver size but are required for proper liver regeneration. In livers depleted of Yap and Taz, liver mass is elevated in neonates and adults. However, Yap/Taz-depleted livers exhibit profound defects in liver regeneration, including an inability to restore liver mass and to properly coordinate cell cycle entry. Taken together, our results highlight requirements for the Hippo pathway during liver regeneration and indicate that there are additional pathways that cooperate with Hippo signaling to control liver size during development and in the adult.

INTRODUCTION

How organ size is regulated in mammals during development and how tissue homeostasis is maintained in adults is a fundamental question that is relevant for normal organ function as well as in pathological situations such as cancer. Several theories have been put forth that address how organs achieve their normal size during embryogenesis, how they regulate their size proportionally with overall body size during neonatal and prepubescent growth, and how cell death and cell proliferation are balanced in adults to maintain tissue homeostasis and normal organ size. 1 Although the theories differ in the mechanisms proposed to underlie organ size control and homeostasis, they all converge on fundamental cellular processes such as cell division and survival. Hippo signaling has emerged as a key pathway in regulating mammalian organ size. 2-4 First described in the fruit fly Drosophila, the Hippo signaling pathway has at its core a kinase cascade that functions to negatively regulate the activities of two key transcriptional coactivators, Yap and Taz. When the Hippo signaling pathway is active, Yap and Taz are phosphorylated by the Lats1/2 kinases, resulting in their destabilization and retention in the cytoplasm. Conversely, when the Hippo pathway is inactive, Yap and Taz enter the nucleus where they interact with a variety of transcription factors that control proliferation, survival and differentiation. Evidence that the Hippo signaling pathway is involved in mammalian organ size control was first obtained by examining the effect of hyperactivation of Yap. 5,6 By expressing a mutant form of Yap that cannot respond to Hippo pathway inhibitory signals, it was shown that Yap can drive abnormal increases in organ size, most notably in the liver.
Subsequently, these findings were extended by inactivation of upstream Hippo pathway components in a variety of tissues, thereby inducing endogenous Yap (and presumably Taz). These studies indicated that Hippo signaling is integral to organ size regulation in the mammalian liver 7-10 and heart. 11 Taken together, these studies suggested that Hippo signaling acts as a link between organ size-sensing mechanisms and the regulation of cell survival and proliferation through modulating the activities of Yap and Taz. Recently, it has become appreciated that the Hippo signaling pathway is not the only pathway that controls the activity of Yap and Taz: inputs from the cytoskeleton and extracellular matrix, and direct interaction with molecules outside of the core Hippo signaling pathway such as angiomotin, have also been shown to regulate the subcellular localization and transcriptional activities of Yap and Taz. 12,13 These findings have broadened our current understanding of Yap and Taz regulation and suggest alternative inputs into the regulation of organ size via Yap/Taz. In addition, these studies highlight the complexity of Yap/Taz regulation at the molecular level and provide additional opportunities for understanding the mechanisms that control mammalian organ size.

Given the proposed importance of Yap and Taz in controlling mammalian organ size, we sought to directly determine their requirement using the mouse liver as a model system. To that end, we generated mice that lack both Yap and Taz in hepatocytes and biliary epithelial cells (Yap/Taz liver conditional knockout). Yap/Taz liver conditional knockout mice are viable and fertile, and their liver to body weight ratio is enlarged. The enlarged livers of Yap/Taz mutants had significantly increased numbers of proliferating hepatocytes accompanied by indicators of liver injury, including elevated serum levels of the liver enzymes alanine transaminase (ALT) and aspartate transaminase (AST). Yap/Taz-mutant livers also regenerated less efficiently than wild-type controls following two-third partial hepatectomy (PHx). Yap/Taz depletion resulted in reduced numbers of hepatocytes incorporating bromodeoxyuridine (BrdU) and an inability to completely recover liver mass following two-third PHx. Taken together, our results indicate that Yap and Taz are not essential for achieving relatively normal liver to body weight ratios during normal development and in unstressed adults; however, they are required to mount efficient regenerative responses and to achieve complete restoration of liver mass following PHx.

MATERIALS AND METHODS

Generation and breeding of yap fl/fl and taz fl/fl mice

Yap fl/fl and taz fl/fl mice were previously described. 14 These mice were bred to Albumin-cre mice, 15 followed by backcrossing to homozygous-floxed animals to generate liver-specific deletion of these genes. The resulting mutants are labeled in this paper as yap Δ/taz Δ. The genetic background of all mice is C57BL/6. All mice were housed in the MD Anderson conventional facility with a 12-h light/dark schedule and a food and water supply. All procedures were approved by the University of Texas MD Anderson Cancer Center Animal Care and Use Committee.

Quantitative PCR

Total mRNAs were extracted from liver tissues with TRIZOL reagent (Invitrogen, 15596-025) and purified with the Qiagen RNeasy Mini Kit (74104). Quantitative RT-PCR analysis was carried out using One-Step TaqMan gene expression assays (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's instructions. Assay IDs for yap and taz are Mm00494237 and Mm00513560, respectively.
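For readers unfamiliar with TaqMan-based relative quantification, the sketch below shows the standard 2^-ΔΔCt calculation that such an assay typically feeds into. The Ct values and the choice of reference gene are illustrative assumptions, not the paper's data.

```python
# Relative expression by the 2^-ddCt method: mutant liver vs. wild type,
# each normalized to an endogenous reference gene.
def rel_expression(ct_gene_mut, ct_ref_mut, ct_gene_wt, ct_ref_wt):
    d_ct_mut = ct_gene_mut - ct_ref_mut
    d_ct_wt = ct_gene_wt - ct_ref_wt
    return 2.0 ** -(d_ct_mut - d_ct_wt)

# Example: yap Ct shifts from 22.1 (wild type) to 27.3 (mutant), reference ~stable.
print(rel_expression(27.3, 12.0, 22.1, 11.9))  # ~0.03, i.e., ~97% reduction
```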
Two-third PHx

Five- to eight-week-old mice were used for PHx. The procedure for two-third PHx has been previously described. 16 Briefly, the mice were anaesthetized and, after opening the abdomen, the median and left lobes were tied off with silk suture to stop the blood flow, followed by lobe resection. The abdomen was then closed with silk sutures, and the mouse was placed in a 37 °C incubator for recovery. Mice were killed at 0 h, 6 h, 24 h, 48 h, 72 h and 7 days (7 d) after PHx, and the livers were harvested for protein extraction and paraffin sections. The 14-day time point was used only for measurement of the liver/body weight ratio.

Immunostaining

Liver tissues were fixed in 4% PFA overnight at 4 °C and processed for paraffin embedding. Paraffin sections were cut at 5 μm. For CK19 staining, 10 mM sodium citrate (pH 6) was used for antigen retrieval. After serum blocking, tissue sections were incubated with primary antibody against cytokeratin 19 (CK19, rabbit, gift from Dr Milton Finegold's lab, Texas Children's Hospital) at 4 °C overnight. The next day, after phosphate-buffered solution washes, sections were incubated with fluor-488 secondary antibody. Cell nuclei were stained with DAPI. Tissue sections were mounted and imaged by confocal microscopy. For labeling of cells undergoing DNA synthesis, 0.01 ml g−1 body weight of a 3 mg ml−1 solution of BrdU (Sigma-Aldrich, St Louis, MO, USA, B9285) in phosphate-buffered solution was injected IP 2 h before killing the mice. BrdU staining was carried out using the BrdU In-Situ Detection Kit (BD Biosciences Pharmingen, San Diego, CA, USA, 550803). For quantification of BrdU-positive hepatocytes, five different areas in each sample were photographed and counted. The result was statistically analyzed by one-way ANOVA (a minimal sketch of this analysis follows the Western blot section below).

Protein extraction and analysis

Total protein was extracted with RIPA buffer in the presence of both protease inhibitor (Roche Molecular Systems, Inc., Pleasanton, CA, USA, 04693132001) and phosphatase inhibitor (Roche Molecular Systems, Inc., 04906837001). Cytoplasmic and nuclear protein fractions were extracted with NE-PER Nuclear and Cytoplasmic Extraction Reagents (Thermo Fisher Scientific, Waltham, MA, USA, #78835). Protein concentration was measured with BCA Protein Assay Reagent (Thermo Fisher Scientific, 23227).

Western blot

Protein samples were denatured by boiling for 5 min in protein-loading buffer containing 5% beta-mercaptoethanol. Western blots were run on 10% acrylamide gels, followed by semi-dry transfer to polyvinylidene fluoride (PVDF) membranes. PVDF membranes were then blocked in 5% milk in Tris-buffered saline/Tween 20 (TBST) for 1 h at room temperature, followed by primary antibody incubation in 5% bovine serum albumin (BSA) at 4 °C overnight. Secondary antibody was incubated at room temperature for 30 min. After washing, the membranes were developed by enhanced chemiluminescence (ECL, Perkin Elmer, Waltham, MA, USA, NEL103001EA), and signals were detected by X-ray film. Primary antibodies used were phospho-Yap/Taz.
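The statistical comparison referenced in the immunostaining section can be sketched as follows, with invented per-field counts of BrdU-positive hepatocytes for two genotypes; scipy's one-way ANOVA is used here as a stand-in for whatever software the authors employed.

```python
# One-way ANOVA on percent BrdU-positive hepatocytes (five fields per group).
import numpy as np
from scipy.stats import f_oneway

wild_type = np.array([0.8, 1.1, 0.6, 0.9, 1.0])  # % BrdU+ per field (illustrative)
yap_taz_mut = np.array([4.2, 3.8, 5.1, 4.6, 4.0])

f_stat, p_value = f_oneway(wild_type, yap_taz_mut)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```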
RESULTS

Efficient deletion of Yap and Taz in the mouse liver

To generate mice that lack Yap and Taz in the biliary epithelial cell and hepatocyte lineages of the liver, we crossed conditional alleles of Yap and Taz 14 to mice that contain an Albumin-cre transgene that directs cre-mediated recombination in fetal hepatic progenitor cells and in adult hepatocytes. 15 To confirm depletion of Yap and Taz in the liver, we performed qRT-PCR and western analysis on extracts from wild-type and age-matched Yap/Taz mutants (yap Δ/taz Δ) (Figure 1a). Both Yap and Taz levels were efficiently reduced in yap Δ/taz Δ livers, as evidenced by significantly diminished RNA and protein levels.

Hepatomegaly, liver injury and compensatory proliferation in yap Δ/taz Δ mutants

To examine the consequences of Yap and Taz depletion on liver size, we killed mice at different time points, harvested their livers and compared liver to body weight ratios. At 2 months of age, yap Δ/taz Δ livers were significantly larger (20%) than their wild-type counterparts. Liver enlargement was also observed at 1 year of age, when yap Δ/taz Δ livers had increased in size to 50% larger than their wild-type counterparts (Figure 1b). Histologically, yap Δ/taz Δ livers exhibited hepatic macrophages in areas of necrosis, suggestive of hepatocyte injury (Figure 1c). To investigate this observation further, we first compared serum levels of AST and ALT (Figure 1d) in 2-month-old yap Δ/taz Δ mutants relative to control mice. Indeed, yap Δ/taz Δ-mutant mice had significantly elevated serum AST and ALT levels, indicating liver injury in these mice. Liver injury is frequently accompanied by compensatory hepatocyte proliferation. To determine whether yap Δ/taz Δ mutants have increased rates of hepatocyte cell cycle entry, we pulsed mice with BrdU and quantified the numbers of hepatocytes that incorporated BrdU in their nuclei, indicating entry into S-phase. Compared with wild-type controls, yap Δ/taz Δ-mutant mice had significantly elevated numbers of BrdU-incorporating hepatocytes (Figure 1e), suggesting that compensatory proliferation occurs upon Yap/Taz depletion.

Inflammation, biliary tract defects and adenoma formation in yap Δ/taz Δ-mutant livers

Albumin-cre is active in fetal hepatocytes, the common progenitor to biliary epithelial cells and hepatocytes in the adult. 15 To examine the consequences of Yap/Taz depletion on the biliary epithelial cell lineage, we examined bile ducts of wild-type and yap Δ/taz Δ mice by histology (Figure 2a) and immunostaining (Figure 2b). Inflammation around the bile ducts was found in yap Δ/taz Δ-mutant livers (Figure 2a). Well-formed, CK19-positive bile ducts were observed in periportal regions of wild-type mice; in contrast, bile ducts in yap Δ/taz Δ-mutant livers were irregularly shaped and less well-formed in comparison (Figure 2b). Unexpectedly, at 1 year of age yap Δ/taz Δ-mutant mice developed liver adenomas (Figure 2c and d). We did not observe other liver tumors in these mice at 1 year of age, including hepatocellular carcinoma or cholangiocarcinoma.

Regulation of Hippo signaling during liver regeneration

It has previously been reported that Yap and Taz activities are modulated following liver injury, including after PHx. 17 To confirm and extend these findings, we examined the phosphorylation status of Yap, Taz and upstream components at different time points after PHx. Consistent with previous findings, we observed that both Yap and Taz phosphorylation transiently decrease after PHx (Figure 3a and b), suggesting that they become activated in response to PHx. To investigate this observation further, we assayed the nuclear and cytoplasmic localization of Yap during PHx by western analysis (Figure 3c).
Prior to surgery, Yap is predominantly cytoplasmic; however, at 6 h post-hepatectomy, equal levels of Yap are seen in nuclear and cytoplasmic fractions, indicative of elevated Yap activity. Elevated nuclear levels of Yap are also observed at 24, 48 and 72 h post-hepatectomy, although the relative levels of nuclear Yap decrease due to increased cytoplasmic Yap localization during this period. In contrast to previous reports, 17,18 we did not observe marked alterations in the phosphorylation of the upstream Hippo pathway components Mst1/2 or Lats1/2 (Figure 3d), although our analysis may not detect changes in their activities that are mediated by phosphorylation at residues not detected by the antibody reagents used in our study or by non-phosphorylation-mediated mechanisms.

Inefficient liver regeneration in yap Δ/taz Δ mice

To assess the requirement(s) for Yap and Taz in controlling compensatory proliferation following PHx, we performed two-third PHx in control and yap Δ/taz Δ mice. Liver regrowth was blunted in yap Δ/taz Δ mice at all time points after PHx, including at 1 and 2 weeks after surgery, time points when wild-type mice had completely regained liver mass (Figure 4a). This defect in restoration of liver mass was paralleled by diminished entry of hepatocytes into S-phase, as assayed by BrdU incorporation (Figure 4b and c). At 48 h, when BrdU incorporation in wild-type livers is maximal, yap Δ/taz Δ livers have significantly fewer hepatocytes that incorporate BrdU. This effect is also observed at 72 h post PHx. At 7 days after surgery, BrdU incorporation has returned to baseline levels in both wild-type and yap Δ/taz Δ mutants but remains significantly higher in yap Δ/taz Δ mutants, consistent with their elevated levels prior to surgery. Defects in BrdU incorporation in yap Δ/taz Δ mice are mirrored by defects in cell cycle progression, marked by the levels of PCNA and cyclin D1 (Figure 4d). In wild-type mice, PCNA and cyclin D1 are markedly induced following PHx, with peak levels seen at 48 and 24 h, respectively. In contrast, yap Δ/taz Δ-mutant mice show markedly elevated baseline levels of cyclin D1 and reduced induction of both cyclin D1 and PCNA after PHx. Taken together, these results indicate that although Yap/Taz are not absolutely required for liver regeneration, they are required for efficient cell cycle entry and for complete restoration of liver size following PHx.

DISCUSSION

The Hippo pathway and its coactivators Yap and Taz have been implicated in regulating organ size and tissue regeneration, and are activated in a wide variety of solid tumors. In this study, we have examined the role of Yap and Taz in controlling mammalian liver size during development and in the perinatal period. Our findings indicate that Yap/Taz are not obligate regulators of hepatocyte proliferation in the embryo or adult. In fact, yap Δ/taz Δ livers are larger relative to their wild-type counterparts. In contrast, during liver regeneration, yap Δ/taz Δ mutants are defective in liver regrowth relative to their wild-type counterparts, indicating an essential role for Yap and Taz in mediating responses to acute liver injury that require compensatory proliferation. The implications of this study for our current understanding of the Hippo pathway in the control of mammalian organ size, homeostasis, regeneration and disease are discussed below.
Organ size control

Activation of Yap, either by expression of a mutant form of Yap that is not effectively inhibited by Hippo pathway kinases, 5,6 or by deletion of the Hippo pathway components NF2, Sav1, Mst1/2 and Lats1/2, 7-11,19-21 has profound effects on organ size. For example, in both the heart and liver, activation of Yap by these means results in increased organ size, both in the embryo and in the adult. These observations have led to the proposal that the Hippo pathway is dynamically regulated during embryogenesis and during perinatal periods to control Yap (and Taz) activities. 22 According to these models, as yet undetermined organ size control signals impact Yap/Taz activities through the Hippo signaling pathway. When an organ has not yet achieved its proper size, Hippo pathway activity is attenuated, thereby allowing Yap/Taz-mediated proliferation to ensue. When proper organ size has been achieved, Hippo pathway activity is enhanced, resulting in inhibition of Yap/Taz and cessation of organ growth. Although this model has received support from the transgenic and knockout studies mentioned above that activate Yap and Taz, relatively little is known about whether Yap and Taz are required to achieve proper organ sizes in neonates and in the perinatal period. Several studies have investigated this question by deleting Yap, for example, in the heart and liver, and current data suggest that Yap is not required for organ size control in these tissues. One potential explanation for these observations is that loss of Yap is compensated for by the maintained presence of Taz in these experiments. Accordingly, depletion of Taz in the context of Yap knockout would be expected to result in defects in organ size, with the prediction that organs cannot grow to their proper sizes in the combined absence of Yap and Taz. Our results clearly indicate that this is not the case, at least for the mammalian liver. Hence, other mechanism(s) must be operating that mask an essential requirement for Yap/Taz in organ size control, or Yap/Taz are not integral components of the pathways that control organ size in embryos or in pre-adult growth stages.

Organ homeostasis

In adults, organ size is maintained by a balance of cell proliferation and cell death. 1 In some tissues, such as the skin and intestine, stem cells fuel tissue renewal through the production of transient amplifying cells that differentiate into the mature cells of the skin and intestine. This rapid cell growth is balanced equally by cell death to maintain organ size. In other tissues, such as the heart and liver, cell division is normally kept at very low levels. In contrast to the normal liver, we observe highly elevated BrdU incorporation rates in yap Δ/taz Δ livers, which was also previously observed in Yap-mutant liver tissues. 23 This increase in hepatocyte proliferation likely contributes to the increased liver/body weight ratio in yap Δ/taz Δ mutants. As with the Yap mutants, yap Δ/taz Δ livers have defective biliary structures that most likely result in the accumulation of toxic bile acids, leading to liver injury and hepatocyte cell death that trigger compensatory proliferation. Indeed, we observe histological signs of hepatic macrophages in areas of necrosis and elevated serum AST and ALT levels in Yap/Taz-mutant mice. These observations indicate that Yap/Taz most likely affect liver homeostasis indirectly, through defects in biliary epithelial cell development.
Organ regeneration

The Hippo signaling pathway has been shown to be important in regulating the regeneration of several tissues, including the intestine 24 and the heart. 11 In the intestine, Yap is activated in response to injury and is required for intestinal regeneration. In the heart, Hippo signaling acts as a barrier to regeneration. In mice, normal cardiomyocytes lose the ability to mount a regenerative response within a week after birth, and this is accompanied by elevated Hippo signaling and reduced Yap/Taz activation. Relieving this inhibition restores the ability of cardiomyocytes to regenerate and to repair cardiac injury. In the liver, previous studies have implicated Hippo signaling in compensatory proliferation after PHx, 17 but a direct evaluation of the role of Yap and Taz in this process has not been performed. Here we have shown that Yap/Taz are indeed required for normal liver regeneration, but are not absolutely essential. Yap Δ/taz Δ-mutant hepatocytes can participate in a regenerative response, albeit not as efficiently as wild-type hepatocytes. Whether this defective regeneration is a direct result of Yap/Taz inactivation in hepatocytes or an indirect result of defects in biliary epithelial cells is unclear at this time. Additional experiments, such as deletion of Yap/Taz selectively in adult hepatocytes, will be required to address this question. That yap Δ/taz Δ livers can mount a regenerative response suggests that other pathways, such as growth factor activation, may partially compensate for Yap/Taz loss in the regenerating liver. Our results and those of others 17 have clearly shown that Yap/Taz phosphorylation and cytoplasmic/nuclear localization are dynamically regulated during liver regeneration. Mechanistically, it is unclear how Yap and Taz are regulated during this process. Previous studies suggested that alterations in the kinase activity of upstream Hippo pathway components contribute to Yap/Taz activation. Indeed, genetic and/or pharmacological activation of Yap/Taz through inhibition of Mst1/2 kinase activities has been shown to augment liver regeneration. 18,25 For reasons that are not clear at this time, we did not observe significant alterations in the Hippo pathway components Mst1/2 or Lats1/2, suggesting that other pathways regulate Yap/Taz subcellular localization in this context. It is possible that kinases other than Mst1/2 or Lats1/2 are required to inhibit Yap/Taz during liver regeneration, and there is some evidence from other systems that this may be the case. 26 In addition, it is well appreciated that an early step in liver regeneration is remodeling of the extracellular matrix. 27 As matrix composition and stiffness are well-known regulators of Yap/Taz nuclear localization and activity, 12 one attractive model is that matrix remodeling induces changes in Yap/Taz activity during the early stages of liver regeneration. Further experiments are required to address this question.

Liver disease

Activation of Yap/Taz in the mouse liver results in liver injury and eventually the formation of liver cancer. 5-7 In addition, Yap and/or Taz activation occurs upon bile acid-induced liver injury and in non-alcoholic hepatosteatosis. 23,28,29 These and related findings suggest that Yap/Taz activation is a common event in liver disease and that sustained Yap/Taz activation contributes to disease progression to liver cancer.
Indeed, several studies have shown that Yap inhibition delays disease progression in genetically engineered mouse models of hepatocellular carcinoma 30 and that Yap inhibition is effective in treating advanced hepatocellular carcinoma. 31 Our observation that yap Δ/taz Δ mutants develop liver adenomas is somewhat paradoxical given previous findings that implicate Yap/Taz as oncogenes in hepatocarcinogenesis. A plausible explanation for this observation is that defects in the biliary system of yap Δ/taz Δ mutants lead to chronic liver injury, setting the stage for liver cancer formation independently of Yap/Taz activation. There is evidence that in human hepatocellular carcinoma ~20-30% of patients have a gene expression signature that suggests activation of Yap/Taz. 31,32 Similarly, immunohistochemical staining for Yap levels and localization has demonstrated that 20-40% of human hepatocellular carcinomas have high levels and nuclear localization of Yap (reviewed in ref. 33). However, these studies also suggest that hepatocellular carcinoma can develop without Yap and/or Taz activation. Another frequent genetic alteration in hepatocellular carcinoma is activation of the Wnt pathway, most often through somatic mutations that stabilize β-catenin. We and others have shown that human hepatocellular carcinomas that harbor activated Wnt pathway components are mutually exclusive with those that have a loss-of-Hippo-signaling signature. 31,32 This observation suggests that liver cancer can occur without Yap/Taz activation and that these liver cancers constitute distinct molecular subtypes that most likely would be less responsive to targeted anti-Yap/Taz therapies. In addition, we propose that one mechanism that may contribute to the adenoma formation we observed in yap Δ/taz Δ mutants might be activation of the Wnt pathway. Additional experiments would be required to test this hypothesis. In summary, we have shown that the Hippo signaling pathway coactivators Yap and Taz are not essential for achieving proper liver size during development or in the perinatal period but are required to mount an effective regenerative response following PHx. Hence, the molecular mechanisms that function to regulate liver size in embryos and in young adults remain undefined. Our results also show that Hippo signaling and Yap/Taz likely function in the context of other pro-regenerative programs that promote liver repair. Identification of these pathways and determining how they interface with Hippo signaling remain important directions for future research in this area. Finally, although Yap and Taz are clearly potent inducers of hepatocellular carcinoma and are frequently deregulated in liver diseases, our results demonstrate that Yap/Taz are not obligate oncogenes in the context of liver cancer. However, the fact that we did not observe malignant hepatocellular carcinomas suggests that Yap and Taz may have a general role in liver cancer progression and that Yap/Taz-targeted therapies may be generally useful for inhibiting tumor progression in the context of liver cancers. Additional studies will be required to address these and other important questions in the future.
Melatonin Inhibits hIAPP Oligomerization by Preventing β-Sheet and Hydrogen Bond Formation of the Amyloidogenic Region Revealed by Replica-Exchange Molecular Dynamics Simulation

The pathogenesis of type 2 diabetes (T2D) is highly related to the abnormal self-assembly of the human islet amyloid polypeptide (hIAPP) into amyloid aggregates. Inhibiting hIAPP aggregation is considered a promising therapeutic strategy for T2D treatment. Melatonin (Mel) was reported to effectively impede the accumulation of hIAPP aggregates and dissolve preformed fibrils. However, the underlying mechanism at the atomic level remains elusive. Here, we performed replica-exchange molecular dynamics (REMD) simulations to investigate the inhibitory effect of Mel on hIAPP oligomerization by using the hIAPP20-29 octamer as a template. The conformational ensemble shows that Mel molecules can significantly prevent the β-sheet and backbone hydrogen bond formation of the hIAPP20-29 octamer and remodel hIAPP oligomers, transforming them into less compact conformations with more disordered content. The interaction analysis shows that the binding behavior of Mel is dominated by hydrogen bonding with the peptide backbone and strengthened by aromatic stacking and CH-π interactions with peptide sidechains. The strong hIAPP-Mel interaction disrupts the hIAPP20-29 association, which is supposed to inhibit amyloid aggregation and cytotoxicity. We also performed conventional MD simulations to investigate the influence and binding affinity of Mel on the preformed hIAPP1-37 fibrillar octamer. Mel was found to preferentially bind to the amyloidogenic region hIAPP20-29, whereas it has only a slight influence on the structural stability of the preformed fibrils. Our findings illustrate a possible pathway by which Mel alleviates diabetes symptoms from the perspective of Mel inhibiting amyloid deposits. This work reveals the inhibitory mechanism of Mel against hIAPP20-29 oligomerization, which provides useful clues for the development of efficient anti-amyloid agents.

Introduction

Diabetes is globally a growing burden on public health. There were 463 million adults with diabetes in 2019 [1], of whom 90% had type 2 diabetes (T2D) mellitus [2]. Most T2D patients show amyloid deposits in pancreatic tissues whose main component is human islet amyloid polypeptide (hIAPP), or amylin [3,4]. Islet amyloid polypeptide is also detected in pancreas biopsies from patients with recent-onset T1D mellitus [5]. hIAPP is co-secreted with insulin by pancreatic β cells [6], and its aggregation and the consequent aggregates are closely related to β-cell failure and insulin deficiency [7-9]. Inhibiting hIAPP oligomerization is a promising strategy to prevent the pathological process of hIAPP. Inhibitors such as small molecules [22-30], nanoparticles [31-33], short peptides [34-36], etc. have been developed to act on hIAPP misfolding and oligomerization. Scherzer-Attali et al. found that the quinone-tryptophan hybrids NQTrp and Cl-NQTrp showed strong inhibition of hIAPP fibril formation (85% and 75% inhibition of fibrils, respectively) at a low concentration (2:1 peptide molar excess) [23]. Tang et al. showed that cloridarol can inhibit hIAPP aggregation from its monomeric and oligomeric states, leading to fibril reduction and increased cell viability; further MD simulations revealed that the binding of cloridarol results from a combination of hydrophobic interactions, aromatic stacking, and hydrogen bonding [30]. Wang et al.
demonstrated that graphene quantum dots (GQDs) inhibit hIAPP fibrillization and eliminate the toxic intermediates in vitro. GQDs can also mitigate the aggregation and the damage elicited by IAPP in vivo; the strong binding of amphiphilic GQDs to IAPP converts coexisting helix and β-hairpin conformations into random coils in silico [31]. These experimental and computational studies expand our understanding of hIAPP aggregation inhibition. Melatonin (Mel), a human endogenous small molecule, is mainly synthesized by the pineal gland. It is essential to maintaining normal biological functions, such as antidiabetic, antioxidant, and anti-obesity activities [37], and its levels can be increased by physical exercise [38,39]. Yang et al. showed that Mel helps restore intestinal permeability by suppressing ERK/MLCK- and ROCK/MCLP-dependent MLC phosphorylation in diabetic rats [40]. Ergenc et al. found that melatonin administration may reverse depressive and anxiety-like behaviors in diabetic rats, mediated by the attenuation of oxidative stress and of AGE, RAGE, and S100B levels in the hippocampus and prefrontal cortex [41]. Costes et al. reported that activation of Mel signaling can alleviate the β-cell loss and dysfunction associated with the molecular stress present in T2D [42]. Jung et al. showed that Mel can regulate the expression and oligomerization of amylin in rat INS-1E cells, which improves the proliferation and cellular functions of pancreatic β cells [43]. These previous studies prove that Mel is widely involved in the pathogenesis of T2D. Interestingly, Aarabi et al. found that Mel can significantly inhibit amyloid formation and destabilize the preformed fibrils of amylin [44]. However, the inhibitory mechanism at the atomic level remains elusive. Here, we performed replica-exchange molecular dynamics (REMD) simulations to investigate the inhibitory effect of Mel on hIAPP oligomerization by using the peptides of the amyloidogenic region hIAPP20-29 as templates. The conformational ensembles and the key interactions involved for the hIAPP20-29 octamer in the absence and presence of Mel molecules were studied. Then, the influence and binding affinity of Mel on the preformed hIAPP1-37 fibrillar octamer were examined by conventional MD simulation. Our results show that Mel inhibits the β-sheet and backbone hydrogen bond formation of hIAPP20-29 and, as a result, the oligomeric conformations are remodeled and less compact. A detailed peptide-Mel interaction analysis revealed the important roles of hydrogen bonding and π-π and CH-π interactions during amyloid inhibition.

Results and Discussion

Two systems were studied using REMD: the isolated hIAPP20-29 octamer and the hIAPP20-29 octamer with Mel, respectively labeled as the hIAPP20-29 and hIAPP20-29 + Mel systems. The other two systems were studied using conventional MD: the hIAPP1-37 fibrillar octamer and the hIAPP1-37 fibrillar octamer with Mel, respectively labeled as the hIAPP1-37 and hIAPP1-37 + Mel systems. The molar ratio of hIAPP peptides to Mel molecules was 1:4, consistent with the previous experimental study [44]. The simulation setup is shown in Figure 1. More details are given in the Supplementary Materials. REMD simulations were performed on a total of 48 replicas whose temperatures ranged from 305 to 425 K, and the analysis used the data at 310 K.
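A common way to build such a replica ladder is geometric (exponential) spacing, which gives roughly uniform exchange acceptance between neighbors; the paper does not state its exact spacing, so the sketch below is an assumption consistent with the stated 48 replicas over 305-425 K.

```python
# Geometric temperature ladder for 48 REMD replicas spanning 305-425 K.
import numpy as np

n_rep, t_min, t_max = 48, 305.0, 425.0
temps = t_min * (t_max / t_min) ** (np.arange(n_rep) / (n_rep - 1))
print(temps[:3], "...", temps[-1])  # ~305.0, 307.2, 309.3, ..., 425.0
```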
The convergence of the REMD simulation data was examined at two different time intervals by comparing three parameters: the probability of the different secondary structures, the PDF of the peptide end-to-end distance, and the PDF of the H-bond number for the hIAPP20-29 octamer. As shown in Figures S1 and S2, these parameters are well converged, and the analysis in the main text is based on the converged data.

Mel Significantly Reduces β-Sheet Formation of hIAPP20-29

To examine the influence of Mel on the secondary structure of the hIAPP20-29 octamer, Figure 2a presents the populations of the different secondary structures. With the addition of Mel, the coil population increases from 52% to 60% and the β-sheet content drops significantly from 16% to 2%; the bend and turn contents are also increased, and β-bridge and helix are almost unchanged. This indicates that Mel can significantly reduce β-sheet formation in hIAPP20-29. This observation is consistent with the previous Thioflavin T fluorescence study in which the fluorescence signal was significantly decreased when hIAPP aggregation was treated with Mel [44]. To further identify which amino acids are most affected, we calculated the probability of the dominant secondary structures (coil and β-sheet) for individual residues in Figure 2b. The coil probability of all the residues increases with the addition of Mel except for L27 (the coil probability of the terminal residues S20 and S29 is 100%). All the residues in the absence of Mel have a relatively high probability of forming β-sheet, among which the hydrophobic residues I26, A25, and L27 have the highest probabilities of 31%, 28%, and 23%, respectively. The β-sheet probabilities dramatically decrease for all the residues in the company of Mel, and the hydrophobic residues I26, A25, and L27 show the largest β-sheet reduction.
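Per-residue populations like those in Figure 2 are typically obtained from a DSSP assignment over the converged ensemble. A minimal sketch using mdtraj follows; the trajectory and topology file names are placeholders, and the simplified three-state DSSP scheme is an assumption.

```python
# Per-residue beta-sheet and coil probabilities from a DSSP assignment.
import mdtraj as md

traj = md.load("hiapp20_29_octamer_310K.xtc", top="hiapp20_29_octamer.pdb")
ss = md.compute_dssp(traj, simplified=True)  # (n_frames, n_residues) of 'H'/'E'/'C'

beta = (ss == "E").mean(axis=0)  # fraction of frames each residue is in beta-sheet
coil = (ss == "C").mean(axis=0)
for i, (b, c) in enumerate(zip(beta, coil)):
    print(f"residue {i + 1}: beta {b:.2f}, coil {c:.2f}")
```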
Mel Reduces Hydrogen Bond Formation of hIAPP20-29 and Transforms Peptide Conformations into Less Compacted Oligomers

The conformational changes under the influence of Mel were explored by calculating the probability density function (PDF) of the Cα-atom root mean square deviation (RMSD) of hIAPP20-29. As shown in Figure 3a, the RMSD of isolated hIAPP mainly covers the range of 1.2-1.7 nm, and with the addition of Mel, the range expands to 2.0-3.1 nm. The obvious increase in the RMSD range and in the deviation from the initial structure suggests a substantial enrichment of the structural diversity of the hIAPP20-29 octamer induced by Mel. The PDF of the peptide end-to-end distance in Figure 3b shows that the peptide chains become less extended in the company of Mel. The hydrogen bond (H-bond) numbers of the peptides (Figure 3c,d) show that, with the addition of Mel, the number of H-bonds between the main chains is greatly reduced and that between the side chains also shows a small reduction. This indicates that Mel effectively blocks the formation of H-bonds between peptide backbones, which is unconducive to the formation of an on-pathway hIAPP oligomer. It also provides an explanation for Mel's inhibition of hIAPP20-29 β-sheet formation, as Mel is able to alter the backbone dihedral angles and simultaneously hinder the formation of backbone H-bonds.
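The backbone H-bond statistics in Figure 3c,d can be reproduced in spirit by a per-frame hydrogen bond count restricted to backbone atoms. The sketch below uses mdtraj's Wernet-Nilsson criterion, which may differ from the geometric criterion used in the paper; file names are placeholders, and filtering to interpeptide pairs is omitted for brevity.

```python
# Count backbone-backbone hydrogen bonds per frame.
import mdtraj as md
import numpy as np

traj = md.load("hiapp20_29_octamer_310K.xtc", top="hiapp20_29_octamer.pdb")
backbone = set(traj.topology.select("backbone"))

counts = []
for hbonds in md.wernet_nilsson(traj):  # one (n_hbonds, 3) index array per frame
    n_bb = sum(1 for d, h, a in hbonds if d in backbone and a in backbone)
    counts.append(n_bb)
print(f"mean backbone H-bonds per frame: {np.mean(counts):.1f}")
```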
The two-dimensional free energy landscape as a function of the solvent-accessible surface area (SASA) and the radius of gyration (RG) of hIAPP20-29 in Figure 4a,b displays an overall view of the influence of Mel molecules on the whole conformational space of the hIAPP20-29 octamer. In the hIAPP20-29 system, the vast majority of conformations are in the range of SASA = 45-70 nm² and RG = 1.1-1.6 nm, and there is only one major free energy basin. In the hIAPP20-29 + Mel system, the free energy surface becomes much broader, with the range of SASA expanded to 58-95 nm² and that of RG to 1.2-2.2 nm. There are also several local minimum energy basins within the free energy landscape. This reveals that Mel exposes more residues to the aqueous environment and makes the hIAPP20-29 aggregates less compact.
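A free energy surface of this kind is usually computed as F = −kT ln P from a 2D histogram of the two order parameters. A minimal sketch with mdtraj is given below; file names are placeholders, and the bin count is arbitrary.

```python
# 2D free energy landscape over (SASA, RG) at 310 K: F = -kT ln P.
import mdtraj as md
import numpy as np

traj = md.load("hiapp20_29_mel_310K.xtc", top="hiapp20_29_mel.pdb")
peptides = traj.atom_slice(traj.topology.select("protein"))

sasa = md.shrake_rupley(peptides).sum(axis=1)  # total SASA per frame, nm^2
rg = md.compute_rg(peptides)                   # radius of gyration per frame, nm

prob, _, _ = np.histogram2d(sasa, rg, bins=60, density=True)
kT = 0.0083145 * 310.0                         # kJ/mol
with np.errstate(divide="ignore"):
    fes = -kT * np.log(prob)
fes -= fes[np.isfinite(fes)].min()             # shift global minimum to zero
```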
The sampled conformations were further clustered, ranked, and projected onto the free energy landscape. Using a Cα-RMSD cutoff of 0.45 nm, the hIAPP20-29 octamers in the hIAPP20-29 and hIAPP20-29 + Mel systems were separated into 295 and 584 clusters, respectively. The ten most-populated clusters and their corresponding proportions are presented in Figure 4c,d; they constitute 40% (hIAPP20-29) and 28% (hIAPP20-29 + Mel) of the total conformations in their respective systems. In the absence of Mel, the hIAPP20-29 octamers primarily adopt β-sheet-rich structures, and β-sheets consisting of three or four β-strands in rows were observed in most clusters. The top ten clusters are relatively concentrated within the minimum energy basin. In the presence of Mel, hIAPP20-29 exhibits more random coil content, and very few β-sheet structures are detected. The distribution of the top ten clusters on the free energy landscape also becomes more dispersed. Previous studies have proved the essential role of the β-sheet propensity of the hIAPP20-29 region in hIAPP aggregation and cytotoxic oligomer formation [16,20,21]. Our results indicate that Mel can greatly increase the structural diversity of hIAPP20-29 oligomers and significantly convert the β-sheet content into disordered conformations, which is expected to work against amyloid aggregation and cytotoxicity.

Peptide-Mel Interaction Analysis Indicates That Hydrogen Bonding, π-π, and CH-π Interactions Play Important Roles in Amyloid Inhibition

The effects of Mel on the hIAPP20-29 interactions are identified in Figure 5 by calculating the main chain-main chain (MC-MC) and side chain-side chain (SC-SC) contact probabilities between interpeptide pairwise residues. In the hIAPP20-29 system, the main chains are predominantly arranged in parallel. The side chain interaction pattern indicates that there is very strong aromatic stacking between the F23-F23 pair and strong hydrophobic interactions between I26-I26 and L27-L27. These paired anchors determine the parallel arrangement of the peptide chains. In addition, the terminal hydrophilic residues Ser and Asn form hydrogen bonds with each other, and F23 forms CH-π interactions with I26 and L27, which helps to further stabilize the oligomer structure. In the hIAPP20-29 + Mel system, the main chains keep a parallel alignment, but the contact probability is remarkably reduced globally, owing to the reduction of backbone H-bonds by Mel. The side chain interaction pattern shows that the F23-F23 aromatic stacking and the hydrophobic interactions between I26-I26 and L27-L27 are significantly weakened. The side chain interactions between hydrophilic residues are also slightly reduced. The aromatic stacking involving F23 and the hydrophobic interactions involving L27 were reported to be critical for hIAPP aggregation and for maintaining the structural stability of hIAPP fibrils [30,45-47]. Our results further support these points and show that Mel can greatly interfere with the aromatic stacking and hydrophobic interactions in hIAPP20-29 association, which destabilizes and remodels the oligomers to facilitate amyloid inhibition.
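A contact map like Figure 5 can be sketched by averaging residue-residue contact probabilities over all interpeptide chain pairs. The code below uses the closest heavy-atom distance with a 0.5 nm cutoff as a simple proxy for the paper's MC-MC/SC-SC definitions, which may differ; file names are placeholders, and the system is assumed to contain only the eight 10-residue peptides.

```python
# Interpeptide residue-residue contact probability map (10 residues per chain).
import itertools
import mdtraj as md
import numpy as np

traj = md.load("hiapp20_29_octamer_310K.xtc", top="hiapp20_29_octamer.pdb")
n_per_chain, n_chains = 10, 8

pairs, cells = [], []
for ri, rj in itertools.combinations(range(n_per_chain * n_chains), 2):
    if ri // n_per_chain != rj // n_per_chain:           # interpeptide pairs only
        pairs.append((ri, rj))
        cells.append((ri % n_per_chain, rj % n_per_chain))

dists, _ = md.compute_contacts(traj, contacts=pairs, scheme="closest-heavy")
prob = (dists < 0.5).mean(axis=0)                        # contact probability per pair

cmap = np.zeros((n_per_chain, n_per_chain))
for (i, j), p in zip(cells, prob):
    cmap[i, j] += p
    cmap[j, i] += p                                      # symmetrize
cmap /= n_chains * (n_chains - 1)                        # average over chain pairs
```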
The interactions between the hIAPP 20-29 octamer and Mel are analyzed in Figure 6. Mel preferentially binds to aromatic F23, followed by hydrophobic I26 and L27, while it has a similar binding affinity to the remaining residues. Our previous study showed that the binding of Mel to the tau protein is dominated by H-bonding between Mel and the peptide backbone and is synergistically aided by other interactions [48]. Here, the binding of Mel to the hIAPP 20-29 octamer exhibits a similar behavior. Mel forms H-bonds with each residue backbone almost uniformly, and the binding to the F23, I26, and L27 residues is further strengthened by aromatic stacking and CH-π interactions, respectively. Mel forms hydrogen bonds with the side chains of the terminal hydrophilic residues Ser and Asn, yet the binding probability of Mel to these residues does not increase, and the side chain interplay between these residues is hardly affected (see Figures 3d and 5b). Hence, we conclude that the binding of Mel to the hIAPP 20-29 octamer is dominated by H-bonding between Mel and the peptide backbone, which is strengthened by aromatic stacking and CH-π interactions and is little affected by side chain H-bonding reinforcement.
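As a concrete illustration of the H-bond bookkeeping underlying this conclusion, the minimal NumPy sketch below applies the geometric criterion given in the Analysis Methods (donor-acceptor distance below 0.35 nm and a D-H-A angle above 150°) to a single candidate triplet; the coordinates are made-up placeholders, not simulation output.

```python
# Sketch: geometric hydrogen-bond criterion from the Analysis Methods
# (D...A distance < 0.35 nm and D-H-A angle > 150 degrees).
# Coordinates below are arbitrary placeholders in nm.
import numpy as np

def is_hbond(d, h, a, d_cut=0.35, ang_cut=150.0):
    """Return True if donor d, hydrogen h, acceptor a satisfy the criterion."""
    if np.linalg.norm(a - d) >= d_cut:           # donor-acceptor distance
        return False
    v1, v2 = d - h, a - h                        # D-H-A angle, vertex at H
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle > ang_cut

donor    = np.array([0.00, 0.00, 0.00])   # e.g., a backbone amide N
hydrogen = np.array([0.10, 0.00, 0.00])   # the amide H
acceptor = np.array([0.29, 0.02, 0.00])   # e.g., a carbonyl O of Mel
print(is_hbond(donor, hydrogen, acceptor))  # True for this geometry
```

Counting such events over all donor-hydrogen-acceptor triplets and all frames yields the per-frame H-bond numbers plotted in Figure 3c,d.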
Note that, for small molecules, there is no necessary connection between the ability to form H-bonds and the ability to prevent fibril formation. Small molecules that form abundant H-bonds may have no or negligible inhibitory effect on fibril formation. Kamihira-Ishijima et al. studied the influence of tea catechins on the amyloid fibril formation of hIAPP 22-27 [47]. Although both EC and ECg formed abundant H-bonds with hIAPP 22-27, only ECg was able to inhibit fibrillization. Levy et al. found that phenolsulfonphthalein, but not phenolphthalein, was effective in inhibiting the fibril formation of hIAPP 20-29, although the two compounds have similar structures and can both act as a donor or an acceptor in H-bond formation [49]. These findings reveal that, even for the same intrinsically disordered protein, small molecules with similar structures may have distinct inhibitory effects, which rely strongly on the outcome of the specific binding of the small molecules to the protein sequence.

The aromatic stacking in the hIAPP-Mel interaction was further examined through the PMF (in kcal/mol) as a function of the angle and centroid distance between the closest benzene rings of F23 and Mel. The basin center is located at about (60°, 0.48 nm), indicating that the stacking pairwise rings of F23 and Mel mainly adopt a herringbone alignment. A representative snapshot displays a typical π-π stacking with a herringbone alignment formed between F23 and Mel when the centroid distance of the pairwise rings is about 5.02 Å. This aromatic stacking between the benzene rings of F23 and Mel competes with and disturbs the aromatic stacking between F23 residues. The PDF of the centroid distance (dπ-π) between the pairwise rings of the F23 residues without/with Mel shows that, in the presence of Mel, the peak position increases from 0.48 to 0.58 nm relative to the isolated hIAPP 20-29 system, and the peak value drops by almost half.
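For readers who wish to reproduce this kind of surface, the short sketch below builds a two-dimensional PMF from sampled (angle, distance) pairs via the Boltzmann inversion −RT ln H(x, y) quoted in the Analysis Methods. The input arrays and the temperature are hypothetical stand-ins for values extracted from the REMD trajectories.

```python
# Sketch: 2D PMF from a histogram of two reaction coordinates,
# PMF(x, y) = -RT ln H(x, y), shifted so the global minimum is zero.
# The sample arrays stand in for per-frame ring angles/distances.
import numpy as np

R = 0.0019872  # kcal/(mol K)
T = 310.0      # K; an assumed analysis temperature, not from the paper

rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 90.0, 50000)   # deg, placeholder samples
dists  = rng.uniform(0.3, 1.0, 50000)    # nm,  placeholder samples

H, xedges, yedges = np.histogram2d(angles, dists, bins=(45, 35), density=True)
pmf = np.full_like(H, np.nan)
occupied = H > 0
pmf[occupied] = -R * T * np.log(H[occupied])  # Boltzmann inversion
pmf -= np.nanmin(pmf)                          # set the basin floor to 0

i, j = np.unravel_index(np.nanargmin(pmf), pmf.shape)
print(f"basin center ~ ({xedges[i]:.0f} deg, {yedges[j]:.2f} nm)")
```

With real trajectory data, the lowest-free-energy bin would land near the (60°, 0.48 nm) basin reported above; the uniform random samples here only demonstrate the mechanics.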
A previous nuclear magnetic resonance (NMR) study on tea catechins and hIAPP 22−27 and an MD study on flavonoids and hIAPP 20-29 proved that aromatic stacking is vital in binding and inhibition [45,47]. Considering the critical role of F23 in the fibril formation and amyloid inhibition of hIAPP [22,30,45], the disruption of the F23-F23 aromatic stacking interaction induced by Mel is expected to greatly hinder hIAPP aggregation.

The CH-π interaction was estimated by calculating the PDF of the minimum distance (dCH-π) between the methyl group of I26/L27 and the benzene ring of Mel. The peaks of the PDF curves are located at 0.36 and 0.35 nm for I26-Mel and L27-Mel, respectively, indicating an intense CH-π interaction of Mel with the hydrophobic I26 and L27. The snapshots show that an intense CH-π stacking is formed when the centroid distance between the methyl group of I26 (L27) and the benzene ring of Mel is about 4.05 Å (3.87 Å). The CH-π interaction between Ile/Leu and aromatic residues was reported to be important for protein structural stability [50,51]. The PDF of dCH-π between the methyl group of I26/L27 and the benzene ring of F23 without/with Mel shows that the peak value becomes lower and the distribution shifts to the right with the addition of Mel. The increased distance indicates that Mel greatly weakens the intensity of the CH-π interaction between peptides. This may be attributed to the smaller steric occupation of Mel compared with F23, which allows its benzene ring to interact with the methyl groups of I26/L27 more flexibly and efficiently. A previous NMR spectroscopy study showed that phenolsulfonphthalein exerts a potent inhibitory effect on fibril formation by hIAPP 20-29 and strongly binds to I26, while phenolphthalein, which shares the same chemical shift deviation resulting from binding to F23, displays low anti-amyloidogenic activity [49]. Here, the competition of Mel with the CH-π interaction between I26/L27 and F23 further disrupts the peptide associations in hIAPP 20-29 oligomerization.

Mel Has a Slight Influence on the Structural Stability of the Preformed hIAPP 1-37 Fibrillar Octamer and Has a High Binding Affinity to the Amyloidogenic Region

In order to investigate the effect of Mel on the preformed hIAPP 1-37 fibril and to identify the binding affinity of Mel to the full-length hIAPP 1-37 fibril, we performed conventional MD simulations on the hIAPP 1-37 and hIAPP 1-37 + Mel systems. As shown in Figure 7, the average time evolution of the Cα-atom RMSD and the populations of different secondary structures in the absence and presence of Mel indicate that Mel has only a slight influence on the tertiary and secondary structures of the preformed hIAPP 1-37 fibrillar octamer. The binding probability of Mel to individual residues shows that Mel prefers to bind to three sites: residues 10QRLANFL16 in the N-terminal region, residues 22NFGAIL27 in the amyloidogenic region, and residues 34SNTY37 in the C-terminal region. The residues F23, Y37, Q10, L12, and N14 have the highest binding probabilities of 7.6%, 6.4%, 5.8%, 5.5%, and 5.4%, respectively, reflecting the important roles of aromatic stacking and H-bonding interactions in Mel binding. Interestingly, F23 is consistently identified as the specific binding site in studies of chloride, C60(OH)24, and dopamine interacting with full-length hIAPP protofibrils, and all three inhibitors preferentially bind to the amyloidogenic region of hIAPP [30,46,52].
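One minimal way to obtain per-residue binding probabilities of the kind quoted above is sketched below: for each residue, count the fraction of frames in which any Mel heavy atom lies within a cutoff of the residue's heavy atoms. The definition of "bound", the 0.54 nm cutoff carried over from the contact criterion, and all file names, residue names, and selections are assumptions for illustration, not the authors' exact protocol.

```python
# Sketch: per-residue binding probability of Mel to the hIAPP 1-37 fibril,
# defined here (as an assumption) as the fraction of frames with any Mel
# heavy atom within 0.54 nm of the residue's heavy atoms.
import MDAnalysis as mda
from MDAnalysis.analysis.distances import distance_array

u = mda.Universe("hiapp1_37_mel.tpr", "hiapp1_37_mel.xtc")   # placeholders
mel = u.select_atoms("resname MEL and not name H*")           # assumed resname
residues = {r: u.select_atoms(f"protein and resid {r} and not name H*")
            for r in range(1, 38)}

hits = {r: 0 for r in residues}
n_frames = 0
for ts in u.trajectory:
    n_frames += 1
    for r, grp in residues.items():
        if distance_array(grp.positions, mel.positions).min() < 5.4:  # Angstrom
            hits[r] += 1

for r in sorted(hits, key=hits.get, reverse=True)[:5]:
    print(f"residue {r}: binding probability {hits[r] / n_frames:.1%}")
```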
Note that the binding behavior of Mel to the amyloidogenic region 20-29 of the preformed full-length fibrils differs from its interaction with the hIAPP 20-29 octamer, because the hydrophobicity and curvature of specific regions on the fibril surface can greatly affect the binding affinity. The number of H-bonds between individual residues and Mel shows that Mel mainly forms H-bonds with the residues N35, Q10, N22, S34, and N14. Our results indicate that Mel has a high binding affinity to the amyloidogenic region of the hIAPP 1-37 fibrillar octamer. We believe that our computational findings on the influence of Mel on the hIAPP secondary structure and oligomeric morphology, as well as the binding sites of Mel on hIAPP, will further inspire the experimental exploration of Mel inhibiting hIAPP in vitro. Additional experimental studies using circular dichroism spectroscopy, NMR, cryo-electron microscopy, gel electrophoresis, etc. are expected to further confirm our simulation results. Still, there is a huge gap between experimental observations in vitro or in animal models and clinical trials. The failure of the anti-AD drug bapineuzumab in two phase III clinical trials has fully demonstrated the complexity of the amyloid inhibition mechanism.

Modeling hIAPP and Melatonin

The amino acid sequence of hIAPP is KCNTATCATQ10RLANFLVHSS20NNFGAILSST30NVGSNTY. The hIAPP 1-37 fibrillar octamer was modeled based on a previous solid-state NMR study [53]. A single hIAPP 1-37 peptide consists of the N-terminal region (residues 1−19), the amyloidogenic region (residues 20−29), and the C-terminal region (residues 30−37). The N-terminus was capped by NH3+, and the C-terminus was amidated in accordance with the experiment [53]. The hIAPP 20-29 peptide was capped by CH3CO at the N-terminus and by NH2 at the C-terminus. The initial structure of the Mel molecule was taken from the ChemSpider database (ID = 872), as shown in Figure 1b. The topology of Mel was generated by the GlycoBioChem PRODRG2 Server [54]. The structure of Mel was first optimized with Spartan'10 [55] and then energy-minimized with GAMESS [56]. The hIAPP 20-29 + Mel system consists of eight hIAPP 20-29 peptides randomly placed in the simulation box and Mel molecules placed at least 2.0 nm (minimum distance) away from hIAPP (see Figure 1). The simulation box was filled with TIP3P water [57], and NaCl was added to neutralize the system and to provide a salt concentration of 0.1 M. The hIAPP 1-37 + Mel system consists of the hIAPP 1-37 fibrillar octamer and 32 Mel molecules. The systems of the isolated hIAPP 20-29 octamer and the hIAPP 1-37 fibrillar octamer in water were run as control groups. The details of the simulated systems are listed in Table S1.

Simulation Details

All-atom MD simulations were performed in an isothermal−isobaric (NPT) ensemble using Gromacs-2018.4 software [58].
Given that characterizing the structure and the oligomerization of hIAPP is very complex, the simulation results are highly dependent on the initial structure, the force field, and the method employed [59]. Being widely used and recommended in previous studies [60-62], the AMBER99SB-ILDN force field [63] was employed to simulate our systems. Periodic boundary conditions were applied in all three directions. The pressures and temperatures of the systems were coupled using the Parrinello-Rahman algorithm [64,65] (1 bar, τP = 1.0 ps) and velocity rescaling [66] (τT = 0.2 ps), respectively. The time step of the simulation was 2 fs, and all bonds were constrained by the LINCS algorithm [67]. The van der Waals interaction was calculated using a cutoff of 1.0 nm, and the electrostatic interaction was treated by means of the PME method [68], with a real-space cutoff of 1.0 nm. The simulation parameter settings applied in our work are consistent with previous studies [46,52].

Analysis Methods

Trajectory analysis was performed using in-house developed codes and GROMACS toolkits. The DSSP program [69] was used to calculate the secondary structure. The Daura cluster analysis method [70] was used to cluster the conformations sampled in the REMD simulations with a Cα-RMSD cutoff of 0.45 nm. The PMF was calculated using the relation −RT ln H(x, y), where H(x, y) is the histogram of the two selected reaction coordinates, the SASA and RG of the hIAPP 20-29 peptides. An atomic contact is defined when two nonhydrogen atoms come within 0.54 nm. An H-bond is defined as formed when the distance between donor D and acceptor A is less than 0.35 nm and the D-H-A angle is larger than 150°. A water probe radius of 0.14 nm was used to calculate the SASA.

Conclusions

In this work, REMD simulations were performed to investigate the inhibitory effect of Mel on hIAPP oligomerization by using the hIAPP 20-29 octamer as the template. We found that Mel can significantly reduce β-sheet and backbone H-bond formation, and the hIAPP 20-29 octamer is transformed into less compact conformations with increased disordered components. A detailed interaction analysis indicated that the binding behavior of Mel is dominated by H-bonding between Mel and the hIAPP 20-29 backbone and strengthened by aromatic stacking and CH-π interactions between Mel and the peptide side chains. The hIAPP-Mel interaction greatly interferes with the hIAPP 20-29 association and destabilizes and remodels the oligomers, which is expected to prevent amyloid aggregation and cytotoxicity. The influence and binding affinity of Mel on the preformed hIAPP 1-37 fibrillar octamer were further examined by performing conventional MD simulations. The results show that Mel prefers to interact with the amyloidogenic region of the preformed hIAPP 1-37 fibrillar octamer, whereas it has only a slight influence on the structural stability. Our findings illustrate a possible pathway by which Mel alleviates diabetes symptoms from the perspective of Mel inhibiting amyloid deposits. This work reveals the inhibitory mechanism of Mel against hIAPP 20-29 oligomerization, which facilitates the development of efficient anti-amyloid agents and provides inspiration for a novel cellular approach to amyloid detection and inhibition.

Institutional Review Board Statement: Not applicable, as the study did not require ethical approval.

Informed Consent Statement: Not applicable, as the study did not involve humans.
Data Availability Statement: The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Sitagliptin ameliorates the progression of atherosclerosis via down regulation of the inflammatory and oxidative pathways

Background: Atherosclerosis is a major cause of death. The most common risk factors are hyperlipidemia and diabetes, along with other factors such as chronic infection and inflammation. Objective: This study was undertaken to assess the effect of sitagliptin on atherosclerosis via interference with inflammatory and oxidative pathways. Materials and Methods: A total of 18 local domestic male rabbits were included in this study. The animals were randomly divided into three groups (6 rabbits in each group): Group I (normal) was fed a standard chow (Oxoid) diet for 12 weeks. Group II was fed a 1% cholesterol-enriched diet for 12 weeks. Group III was fed a cholesterol-enriched diet for 6 weeks and then continued on the cholesterol-enriched diet while being treated with sitagliptin 125 mg/kg/day orally for the next 6 weeks. Blood samples were collected at the start of the study, at 6 weeks, and at the end of treatment to measure the serum lipid profile, hsCRP, and TNF-α. At the end of the study, the aorta was removed for measurement of MDA, glutathione, and aortic intima-media thickness. Results: Sitagliptin resulted in a significant reduction (p < 0.05) in serum levels of total cholesterol (TC), triglycerides (TG), high-sensitivity C-reactive protein (hsCRP), and TNF-α, with a significant increase (p < 0.05) in serum HDL level. There was a significant reduction (p < 0.05) in aortic MDA in comparison with the untreated control group. Furthermore, sitagliptin caused a significant increase (p < 0.05) in aortic GSH in comparison with the induced untreated group. Regarding the histopathological results, sitagliptin produced a significant reduction (p < 0.05) in atherosclerotic lesions in comparison with the induced untreated group and a significant reduction in aortic intima-media thickness (p < 0.05). Conclusion: Sitagliptin reduced atherosclerosis progression in hyperlipidemic rabbits via its effect on lipid parameters and interference with inflammation and oxidative stress.

Introduction

The endothelium has autocrine and paracrine components that regulate anti-inflammatory, mitogenic, and contractile activities of the vessel wall as well as the homeostatic processes within the vessel lumen. Atherosclerosis is likely initiated when endothelial cells over-express adhesion molecules in response to endothelial injury secondary to turbulent flow. Increased cellular adhesion and the associated endothelial dysfunction lead to the recruitment of inflammatory cells, release of cytokines, and deposition of lipids into the atherosclerotic plaque. Atherothrombosis is mediated, in large part, by the inflammatory cascade. 1 Recruited macrophages both release additional cytokines and begin to migrate through the endothelial surface into the media of the vessel. This process is further enhanced by the local release of monocyte-colony stimulating factor (M-CSF), which causes monocytic proliferation; local activation of monocytes leads to both cytokine-mediated progression of atherosclerosis and oxidation of low-density lipoprotein (LDL). Once initiated, many mediators of inflammation have been described to influence the development of the atherosclerotic plaque. 2 Inflammatory mediators expressed by smooth muscle cells within the atherosclerotic plaque include interleukin (IL)-1β, tumor necrosis factor (TNF)-α and β, IL-6, M-CSF, monocyte chemotactic protein-1 (MCP-1), and IL-18.
The effects of these mediators are varied and include mitogenesis, extracellular matrix proliferation, angiogenesis, and foam cell development. 3 Gliptins are an innovative class of oral anti-diabetic agents that enhance and prolong the physiological actions of the incretin hormones that increase insulin secretion. Sitagliptin is an orally available dipeptidyl peptidase-IV inhibitor (DPPI) developed as a once-daily treatment for type 2 diabetes mellitus; it has shown beneficial effects on glycemic control, reducing HbA1c and preventing hypoglycemia, as well as on islet mass and function, with no significant adverse effects. 4,5 In addition to glucose control via insulin and glucagon secretion, incretins improve peripheral insulin sensitization, cardiac and neuronal protection, and beta-cell preservation. The use of an incretin enhancer (such as sitagliptin) might therefore have beneficial effects on diabetes pathophysiology and on the prevention of its serious complications such as atherosclerosis.

Animals

A total of 18 local domestic male rabbits were included in this study. The animals were randomly divided into three groups (6 rabbits in each group). Group I rabbits were fed a normal chow (Oxoid) diet for 12 weeks. Group II rabbits were fed a 1% cholesterol-enriched diet for 12 weeks. Group III rabbits were fed a cholesterol-enriched diet for 6 weeks and then continued on the cholesterol-enriched diet while being treated with sitagliptin 125 mg/kg/day orally for the next 6 weeks. Blood samples were collected at the start of the study, at 6 weeks, and at the end of the treatment course for measurement of the serum lipid profile (total cholesterol (TC), triglycerides (TG), and high-density lipoprotein (HDL)), high-sensitivity C-reactive protein (hsCRP), and TNF-α. At the end of the study, the aortas were removed for measurement of aortic malondialdehyde (MDA), glutathione (GSH), and aortic intima-media thickness, and for sectioning for histopathology.

Preparation of samples. From each rabbit, about 3 mL of blood was collected from the central ear artery, without the use of heparin, after overnight fasting. Blood sampling was done first at the start of the study (time 0), then after 6 weeks of the induction period, and finally at the end of the treatment course (12 weeks). The blood samples were allowed to clot at 37°C and centrifuged at 3000 r/min for 15 min. Sera were collected and analyzed for serum TC, TG, HDL-cholesterol (HDL-C), hsCRP, and TNF-α.

Tissue preparation for oxidative stress measurement. Twenty percent tissue homogenates were prepared in phosphate buffer at pH 7.5 containing 1 mmol/L sodium-EDTA. The homogenates were centrifuged at 20,000×g at 4°C for 30 min, and the supernatants were used for biochemical measurements of GSH and MDA levels.

Histopathological procedure. Aortic (abdominal and thoracic aorta) sectioning was done at autopsy at the end of the study (after 12 weeks); histopathology was used to confirm the anti-atherogenic effect of sitagliptin in comparison with the control groups (normal control and induced untreated control).
The sections were examined under a microscope at magnification powers of 4×, 10×, and 40×, and the histological changes were graded according to the American Heart Association classification of atherosclerosis, 6 which divides atherosclerotic lesions into six types as follows: Type I (initial) lesion: isolated macrophage foam cells; Type II (fatty streak) lesion: mainly intracellular lipid accumulation; Type III (intermediate) lesion: type II changes and small extracellular lipid pools; Type IV (atheroma) lesion: type II changes and a core of extracellular lipid; Type V (fibro-atheroma) lesion: lipid core and fibrotic layer, or multiple lipid cores and fibrotic layers; Type VI (complicated) lesion: complicated fibro-atheroma with hemorrhage or thrombus.

Statistical analysis

Data were expressed as mean ± standard error of the mean (SEM). Using SPSS version 17, the unpaired t-test was used to compare mean values between the different groups.

Effect of sitagliptin on serum lipid profile

At the end of the first 6 weeks, all animals fed the high-cholesterol diet showed a significant increase in the lipid profile. At the end of the 12 weeks, however, the sitagliptin-treated group showed a significant reduction in serum lipids in comparison with the untreated group (see Table 1).

Effect on aortic tissue reduced GSH level and MDA

At the end of the study, after 12 weeks of high-cholesterol diet, the aortic GSH level was significantly decreased in the induced untreated group (II), and there was a significant increase in MDA level (p < 0.05) in comparison with the normal control group. In the sitagliptin-treated group (III), after 12 weeks of high-cholesterol diet, there was a significant increase in the GSH level (p < 0.05) associated with a significant decrease in MDA level (p < 0.05; Table 2).

Effect of sitagliptin on TNF-α and hsCRP

Before the study, the baseline levels of serum hsCRP and TNF-α did not differ significantly among the groups. After 6 weeks of high-cholesterol diet, the TNF-α and hsCRP levels increased significantly (p < 0.05) in all groups except the normal group. After 12 weeks, the hsCRP and TNF-α levels decreased significantly in the sitagliptin-treated group (p < 0.05) as compared with the induced untreated group (Table 3).

Effect of sitagliptin on atherosclerosis and aortic intima-media thickness

At the end of 12 weeks of high-cholesterol diet, rabbits treated with sitagliptin had a significant reduction in the severity of atherosclerotic lesions in comparison with rabbits in the induced untreated group (Figure 1(b) and (c)). The aortic intima-media thickness (measured by histomorphometry) was significantly increased in the induced untreated group (II) in comparison with the normal control (p < 0.05), as shown in Figure 1(a) and (b). The aortic intima-media thickness of the sitagliptin-treated group (III) was significantly lower than that of the induced untreated group (II), as shown in Figure 1 and Table 4.
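Since the paper reports group means with SEM for n = 6 rabbits per group, the between-group comparisons can in principle be reproduced from the summary statistics alone. The sketch below illustrates the idea with SciPy's two-sample t-test from summary data; the numbers are made-up placeholders, not the actual values from Tables 1-4.

```python
# Sketch: unpaired t-test from summary statistics (mean, SEM, n), as used
# for the between-group comparisons. The values below are illustrative
# placeholders, not the measurements from Tables 1-4.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 6  # rabbits per group

# hypothetical aortic intima-media thickness, mean +/- SEM (micrometers)
mean_untreated, sem_untreated = 210.0, 12.0
mean_treated,   sem_treated   = 160.0, 10.0

# ttest_ind_from_stats expects standard deviations: SD = SEM * sqrt(n)
t, p = ttest_ind_from_stats(mean_untreated, sem_untreated * sqrt(n), n,
                            mean_treated,   sem_treated   * sqrt(n), n)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> significant difference
```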
Discussion

In this study, we demonstrate that a high atherogenic diet causes a significant increase in lipid parameters 7,8 (TC, TG, and atherogenic index) in comparison with the control group. Treatment with sitagliptin causes a significant reduction in TC, TG, and the atherogenic index in comparison with the induced untreated group. This result is consistent with those reported by Matikanianin et al. 9 In this study, sitagliptin treatment significantly reduced the elevation of the inflammatory markers (hsCRP and TNF-α) in the atherosclerosis model of the hypercholesterolemic rabbit, 10,11 suggesting that sitagliptin inhibits vascular inflammation induced by a high atherogenic diet. These results are consistent with those reported by Ferreira et al. 12 In our study, atherosclerosis was associated with an increase in the level of the lipid peroxidation product MDA and a decrease in the level of GSH in aortic tissue, suggesting an increase in the levels or activity of oxygen radicals. MDA and GSH have been considered specific indicators of oxidative status. 13 The MDA level is widely utilized as a marker of lipid peroxidation; its measurement gives direct evidence of LDL oxidation and is important in predicting free radical-induced injury. Therefore, the observed elevation in tissue MDA may be attributed to hyperlipidemia, which enhances the processes of lipid peroxidation. Hypercholesterolemia could increase the levels of reactive oxygen species (ROS) through stimulation of polymorphonuclear leukocytes (PMNLs) and dysfunction of endothelial cells. 14,15 Furthermore, hypercholesterolemia, especially if prolonged, results in a vascular oxidant burden, 16,17 which could favor GSH depletion because of enhanced oxidation of the tripeptide or its consumption by electrophilic compounds such as lipoperoxidation aldehydes. 18,19 Sitagliptin treatment significantly reduced the aortic MDA level, suggesting a decrease in ROS and subsequent lipid peroxidation. Sitagliptin also had a significant effect on aortic GSH levels, preventing GSH depletion in the hypercholesterolemic rabbit and thus maintaining the antioxidant reserve that is important for vascular protection against lipid peroxides. 12 In rabbits treated with sitagliptin, there was a significant reduction in the severity of atherosclerotic lesions in comparison with rabbits in the induced untreated group, as well as a significant decrease in aortic intima-media thickness (p < 0.05) compared with the induced untreated group. In our study, we found that sitagliptin exerts an anti-inflammatory effect by reducing hsCRP and TNF-α and an antioxidant effect by reducing lipid peroxide (MDA) and enhancing GSH. Sitagliptin has a potent and rapid anti-inflammatory effect by which it may reduce atherosclerosis. 20 Our findings are consistent with those of Barbieri et al., 21 who found that dipeptidyl peptidase inhibitors decrease inflammation and oxidative stress in diabetic patients and therefore decrease the progression of atherosclerosis. Our results may provide answers about how sitagliptin reduces aortic intima-media thickness and atherosclerosis: by suppression of the systemic inflammatory response and oxidative stress.

Conclusion

The results of this study reveal that sitagliptin reduces atherosclerosis progression in experimentally induced atherosclerosis by interfering with inflammatory and oxidative pathways.

Declaration of conflicting interests

The authors declare that there is no conflict of interest.
Low reliability of home-based diagnosis of malaria in a rural community in western Kenya

Introduction: Home-based management of malaria is promoted as a major strategy for improving prompt delivery of effective malaria treatment in Africa. This study aimed to determine the proportion of children who tested positive for malaria with routine light microscopy among those whose mothers had made a home-based diagnosis in a rural community in western Kenya. Methodology: This cross-sectional study was conducted at Bokoli location, Bungoma East District, in November and December 2007. Mothers of children five years of age or under with malaria diagnosed by their mothers were interviewed (n = 96). Duplicate blood smears were collected, stained with field stain A (methylene blue, azure) and B (eosin), and examined for malaria parasites using light microscopy. Results: Only 30/96 (31.2%) specimens were positive for Plasmodium falciparum. Elevated temperature (70/96; 72.9%) in their children was the criterion for the diagnosis of malaria most commonly cited by the mothers. In 57 of the 96 cases, the mothers provided information regarding treatment during the current malaria episode; of these, 10 (17.5%) had received treatment for malaria, but six (60%) of these were parasite negative. This means that only 4/21 (19.0%) of children with positive smear microscopy received treatment. The most commonly used anti-malaria drugs were Fansidar (37.8%) and Metakelfin (29.7%). Conclusion: The difficulty of diagnosing malaria accurately at home increases the urgent need for improved diagnostic tools that can be used at the community level in poor populations. Intervention measures are needed to increase the treatment rate to reduce reservoirs and malaria parasite transmission.

Introduction

Malaria is one of the most severe public health problems. The World Health Organization recommends that anyone suspected of having malaria should receive diagnosis and treatment with an effective drug within 24 hours of the onset of symptoms [1]. Definitive diagnosis of malaria infection is still based on identifying plasmodia in blood films. In general, the screening of blood slides by microscopy is considered to be the gold standard. However, presumptive malaria treatment without laboratory diagnosis has been justified by the scarcity of clinical facilities and the high case fatality rate of malaria in high-prevalence areas. Severe malaria is associated with delayed presentation at a health facility and late use of anti-malarial drugs [2,3]. Consequently, home treatment is acceptable when the patient does not have access to a health-care provider within that time period, as is the case for most patients in malaria-endemic areas [4,5].

Recent studies have emphasized the difficulty of making a presumptive diagnosis of malaria and highlight the urgent need for improved diagnostic tools that can be used at the community and primary-care levels, especially in resource-poor settings [6,7]. Even though febrile illnesses are commonly treated at home, little attention has been paid to comparing the caretakers' diagnosis of malaria in the community against laboratory microscopy. The purpose of this study was to compare the results of routine malaria microscopy used at the health centre with the mothers' diagnosis of malaria in a rural community within western Kenya.
Study setting

The study was conducted in Bokoli location, Webuye division of Bungoma East District, western Kenya. The area is located approximately 100 km north of Lake Victoria. The study site, Bokoli sub-location, which is predominantly rural, has 15 villages and lies within a malaria-endemic region. There is only one government health centre. Annual temperatures range from 21°C to 25°C, and rainfall ranges from 1,600 mm to 2,000 mm. According to the latest census of 1999, the area has a population of about 6,200 within an area of 15.4 km². There are about 400 homesteads with 1,158 households. Approximately 2% (125) of the population is comprised of children five years of age or under. The majority of the residents practice subsistence farming; sugarcane is the main cash crop. According to the local health centre records, malaria is the main cause of patients presenting at the hospital, with a prevalence of approximately 30% by clinical diagnosis (according to Bokoli Health Centre medical records, 2006).

Study design

A community-based cross-sectional study was conducted in November and December 2007, using face-to-face interviewer-administered questionnaires for quantitative data collection. The estimated sample size, based on the assumption that the prevalence of malaria in the study area is approximately 30%, was 90 individuals. The study site was selected by purposive sampling because of the high malaria prevalence. From every consecutive household, study subjects were selected randomly from mothers of children five years of age or under with malaria. Selection of respondents continued until 96 respondents were secured. The disease was classified as malaria based on self-diagnosis according to the community members' own perception. Caretakers were interviewed regarding the administered treatment.
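The sample size of about 90 quoted above is consistent with the standard single-proportion formula n = Z²p(1−p)/d². The sketch below reproduces the figure under the assumption, not stated in the text, of a 95% confidence level and an absolute precision of roughly ±9.5%.

```python
# Sketch: single-proportion sample-size formula n = Z^2 * p * (1 - p) / d^2.
# The confidence level and precision are assumptions; the paper states only
# the anticipated prevalence (~30%) and the resulting n of 90.
from math import ceil

Z = 1.96    # 95% confidence level (assumed)
p = 0.30    # anticipated malaria prevalence
d = 0.095   # absolute precision (assumed, chosen to match the reported n)

n = (Z**2 * p * (1 - p)) / d**2
print(ceil(n))  # -> 90
```

Data collection and analysis

Mothers of children five years of age or under were interviewed by trained research assistants using a pre-tested semi-structured questionnaire to elicit responses regarding age, education level, malaria diagnosis, and treatment. Mothers under 18 years of age and those who did not give consent to participate were not included in the study.

Thick and thin blood smears were prepared on slides in duplicate from finger pricks of children who had malaria according to their mothers' perception and taken to the Bokoli Health Centre for malaria parasite microscopy diagnosis. Participants with smear-positive results were referred to the health centre for treatment.

Data were entered into the Statistical Package for Social Scientists (SPSS for Windows version 14, Washington, USA), checked for consistency, and analysed using descriptive statistics. The association between microscopy and home-based diagnosis was assessed using the chi-square test.

Laboratory processing of blood smears

For each study participant, two blood smears were prepared and transported to Bokoli Health Centre. The slides were stained at the health centre with field stain A (methylene blue and azure) and B (eosin) according to routine procedures [8]. Thin blood smears were fixed in methanol before staining. A trained technician examined the smears for the presence of malaria parasites and identified the species based on the appearance of trophozoites and gametocytes. All slides were counter-checked by the principal investigator.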
Ethical considerations

The research was approved by Maseno University, School of Graduate Studies, which serves as the Institutional Review Board. Permission to conduct the research was sought from the Medical Officer of Health, Bungoma East District, and the local administration. Informed verbal consent was sought from all mothers 18 years of age and above to include their children in the study. Participants with smear-positive results were referred to the health centre for specific treatment.

Results

Of the 96 children included, the mean age was 25.6 months (range = 1 to 60 months; median = 25 months). Malaria parasites were detected by slide microscopy in only 30/96 (31.2%) of the specimens. In all cases, the parasite species identified was P. falciparum.

The ages of the children and respondents in comparison with the blood smear test results are shown in Table 1. There was a statistically non-significant trend for a decrease in malaria-positive cases by microscopy as the age of the children increased, after which the trend reversed upward to stabilize at about 30% (p = 0.51). There was no significant relationship between the mothers' age or education level and malaria diagnosis (p = 0.58 and 0.46, respectively). The mothers' criteria for the diagnosis of malaria in their children most commonly included elevated temperature (70/96; 72.9%). Other reasons for diagnosis, given by 27.1% (26/96) of the mothers, included loss of appetite, vomiting, crying, dullness, dizziness, diarrhoea, and coughing.

Of the 96 mothers interviewed, 37 (38.5%) indicated that they used anti-malaria drugs whenever they suspected malaria in their children. However, mothers provided information about specific treatment in the current malaria episode in only 57 of the 96 cases. Of these, 10 (17.5%) had received treatment for malaria, but interestingly, six (60%) of these were negative by slide smear microscopy. This means that 4/21 (19.0%) of children with positive smear microscopy received treatment, and 17/21 (81.0%) did not (p = 0.05).

Discussion

The current data show that mothers correctly diagnosed malaria in their children in only about one-third of the cases. In addition, specific treatment rates were extremely low. A review of records at the same health centre revealed that approximately 30% of patients diagnosed with clinical malaria by the health-care providers were slide smear-positive, meaning that the accuracy of the health-care providers' diagnosis is similar to that of the mothers. In fact, presumptive diagnosis has previously been demonstrated in many settings to be highly inaccurate [6,7,9,10]. However, in many malaria-endemic countries, clinical diagnosis is often the only determining factor for treatment, as laboratory techniques to confirm the clinical suspicion are considered expensive, labor-intensive, or not sensitive enough [11]. Fever is the clinical hallmark of uncomplicated malaria [12,13], and empiric treatment of fever with antimalarials is widely advocated and practiced in Africa.
On the other hand, the potential benefits of malaria microscopy are currently not realized because of the poor quality of routine testing [6,14]. For example, Giemsa stain microscopy in selected district health laboratories in Kenya had low sensitivity and specificity (69% and 62%, respectively) [14]. Other studies elsewhere have shown that Giemsa stain is more sensitive than field stains A and B, with suboptimal performance attributed to high workload and poor supervision of laboratory technicians [15]. However, Giemsa field stain is currently used widely in Kenya as the sole diagnostic test for malaria and thus may miss a considerable number of cases, leading to misdiagnosis and inappropriate treatment decisions. Clearly, more sensitive, easy-to-use, and low-cost tests are necessary to ascertain the actual prevalence of malaria in the study area [11,16].

Ideally, all persons who are sick with malaria should be treated promptly with effective antimalarials [17]. Apart from alleviating suffering, treatment eliminates essential components of the parasite cycle, thus interrupting transmission. Early initiation of malaria treatment largely depends on good laboratory-confirmed diagnosis and access to health care. In this study, many (80%) of the cases with smear-positive microscopy tests had not received any treatment. In Uganda, 96.2% of patients with a routine positive slide result, and 47.6% of those with a negative result, were treated for malaria [6]. Misdiagnosis of malaria contributes to a vicious cycle of increasing ill health and deepening poverty. In the light of the changing drug policies of many African countries, including Kenya, where the expensive artemisinin combination therapy drugs are prescribed as first-line treatment, good laboratory confirmation will also have an economic impact [18,19]. Rapid diagnostic testing (RDT) is a valuable tool for diagnosis and can shorten the interval to starting treatment, particularly where microscopy may not be feasible due to resource and distance limitations [11,20]. Molecular tests are more sensitive but expensive and difficult to implement in rural areas. Much better direct evidence is needed about why and how misdiagnosis affects disease outcomes among the poor and vulnerable.

In the current study, the accuracy of home-based diagnosis compared with slide microscopy decreased with the age of children up to 36 months, and then the trend reversed. It is not clear whether this diagnosis was based on the mothers' prior experience. Although home-based diagnosis was independent of maternal age and level of education, the latter may improve care by increasing the likelihood of seeking treatment from a health facility early for better management [21].

Following recognition that sulphadoxine/sulphalene-pyrimethamine (SP) was failing, there was a rapid technical appraisal of available data. Consideration of replacement options resulted in a decision in 2004 to adopt artemether-lumefantrine (AL) as the recommended first-line therapy in Kenya [22,23]. Despite this, in the current study, the most commonly used drug was Fansidar.
Although home management of malaria has been shown to be an effective strategy for reducing childhood mortality from malaria, there is still a need to educate and train not only the mothers and caretakers but also the health-care professionals [10]. The common practice of prescribing antimalarials for all episodes of fever in regions where malaria is endemic is likely to lead to both overtreatment of malaria and underdiagnosis of other treatable causes of fever [24,25]. A study in Mali demonstrated that for children aged 0-5 years in a high-transmission area of sub-Saharan Africa, the use of RDTs was not cost-effective compared with presumptive treatment of malaria with an artemisinin combination therapy [26]. However, the hidden cost of drug resistance associated with inappropriate treatment may be substantial. The major risk of a presumptive treatment strategy is an increase in drug resistance, but a benefit may be a reduction in malaria rates, as evident in other parts of Africa [13].

We conclude that further research is required to guide treatment decisions. Meanwhile, health systems need strengthening at the community level, so that affordable, rapid, and accurate diagnosis for effective treatment is available. A shift from presumptive to parasitological diagnosis should encompass substantial strengthening of microscopy testing for malaria parasites to reduce inappropriate exposure of poor rural communities to anti-malaria drugs.

Table 1. Child and maternal age against blood smear results. * Nine women did not give their ages.
Role of Fat Feeding on the Diabetic Albino Rats

Ahmed H. Abdel-Rahman H. El-Rashedy1*, Mohamed H. Mostafa Wahdan2, Khaled Abdel-Fattah R. El-Sabban3, Mohamed Ali Khadrawy4, Tamer M. M. Abu-Amara5, Hanan A. Al-Hamaky6 and Nahid M. El-Hagar7

1Histopathology and cytopathology, Taif University, KSA; 2Electron microscopic assessment, Taif University, KSA; 3Department of medicine, Taif University, KSA; 4Medical statistics, Taif University, KSA; 5Histology, Al-Azhar University, Egypt; 6Histopathology and cytopathology, Taibah University, KSA; 7Department of medicine, Taif University, KSA

Introduction

Rapid nutritional modifications are occurring in the developing countries [1]. In these countries, there is a shift from low-calorie foods containing little fat and a high amount of fiber toward a hypercaloric diet containing large quantities of refined carbohydrates, high total fat, and red meat, along with low intakes of fiber [2]. The widespread availability of unhealthy low-cost vegetable oils, as well as their commercial use, can lead to an increased dietary intake of fats and trans-fatty acids (TFAs) [3]. These faulty dietary habits contribute to the high prevalence of the metabolic syndrome, obesity, and type 2 diabetes (T2D) triad [4]. Certain fatty acids are evidenced to influence cellular metabolism by adjusting the balance between lipogenesis and fatty acid oxidation. Altering the amount and quality of dietary fats may modify insulin sensitivity [5]. Ingestion of large amounts of fat is accompanied by high fasting serum insulin levels as well as a lower insulin sensitivity index [2]. Data from epidemiologic studies in developed nations showed higher total fat consumption in T2D patients than in normoglycemic controls [6]. The intake of fats and oils in developing countries has been correlated with hyperglycemia and diabetes, with associations detected with obesity, insulin resistance, and the metabolic syndrome [10]. Misra et al. [11] provided more detailed reviews of Asian Indian and South Asian diets, particularly regarding insulin resistance and non-communicable diseases (NCDs). High-fat diets promote weight gain as well as insulin resistance and may be implicated in T2D development; however, secular data are limited regarding the relationships of total dietary fat with obesity and the metabolic syndrome, and no data are available regarding T2D in developing countries. Importantly, the cost and availability of fats and oils determine their usage in developing countries. For instance, because of the high cost of olive oil, it is used sparingly, while mustard, sunflower, and soybean oils are used widely because of their low cost [2]. A high intake of traditional diets (containing full-fat milk as well as high amounts of fats and oils) was associated with an increased incidence of abdominal obesity in adult Mongolian females [12]. The general guidelines from the international organizations (WHO/FAO) for fat intake in the developing countries, particularly in view of the rise of T2D and coronary heart disease, should be individualized, keeping in mind the local dietary and cooking practices as well as the use of cooking oils in these countries [13]. The notion that high-fat diets are associated with impaired insulin action is strongly supported [14]. Saturated fats in particular are associated with the greatest harmful effects, including a high risk of developing cardiovascular disorders.
It has been recommended that total fat intake should not exceed thirty percent of calories and that favored diets be low in saturated fat [15]. However, total fat as a proportion of energy intake has not proved significant for the avoidance of T2D [16]. T2D is regarded as a multi-etiological disorder with many pathogenetic mechanisms underlying its development [17]. The mechanisms of insulin resistance and glucose intolerance involve communication of signaling proteins across cellular membranes, the cytoplasm, and nuclear receptors in several tissues, with target effects in organs such as the pancreas, kidney, liver, and brain, in addition to adipose and muscular tissues. Type 2 diabetes may involve multiple steps at which nutrient effects could occur, and various nutrients may be significant. Since diet acts differently on insulin in the earlier and later stages of diabetes, the high serum insulin level accompanying both normal and abnormal glucose tolerance indicates a dietary role in these stages of the disease, particularly the later ones, which are characterized by greater stress on pancreatic β-cells [18,19]. The incidence of T2D is rising rapidly throughout the world. In addition to the genetic basis, T2D associated with obesity results from multiple adverse behavioral and environmental etiological factors, such as insufficient physical activity and high caloric intake [20]. The association of risky fat intake with diabetes has been statistically undervalued through the treatment of dietary fat intake as a modifiable diabetic risk factor [21]. Moreover, the intake of cereals or caloric diet is a statistically confounding variable because of its independent relationship to the risk of diabetes [22].

C-peptide is synthesized together with insulin within the human body. Preproinsulin comprises a signal sequence together with the A-chain, B-chain, and C-peptide. First, the signal sequence is cleaved off, leaving the insulin precursor, proinsulin. C-peptide is then excised, leaving the A- and B-chains that constitute insulin. C-peptide was first described in 1967, together with the elucidation of insulin biosynthesis [23]. C-peptide acts as a significant link between the A- and B-chains of the insulin molecule. This peptide also promotes the effective assembly, folding, and processing of insulin within the β-cell endoplasmic reticulum [24]. The granules of the pancreatic islets' β-cells store both insulin and C-peptide, which are finally liberated into the portal blood. C-peptide is considered an indicator of insulin secretion and is valuable in understanding the disturbed physiological functions encountered in type 1 diabetes (T1D) as well as T2D. Moreover, C-peptide has previously been found to affect both the microvascular circulation and tissue health [25]. In addition, C-peptide binds to a G-protein-coupled receptor found on the surface of many cells, such as neurons, renal tubular lining cells, fibroblasts, and vascular endothelial cells [25,26]. Intracellular calcium-dependent signaling pathways are activated, upregulating the activities of certain enzymes such as endothelial nitric oxide synthase (eNOS) and sodium/potassium adenosine triphosphatase [27]. In type 1 diabetic patients, the activities of these two enzymes are reduced, and this is held responsible for long-term neurological complications in these patients.
Furthermore, the administration of C-peptide to type 1 diabetic animals has been associated with improvement of neurological and renal function [28,29]. Also, replacement C-peptide therapy for early neuropathy in diabetic animals has been accompanied by improvement of both functional and structural peripheral nerve impairment [27]. C-peptide therapy in nephropathic type 1 diabetic animals lacking C-peptide produced improvement in both renal functional and structural changes, as well as reduction of proteinuria and of the diabetic glomerular changes that follow expansion of the mesangial matrix [26]. The measurement of serum levels of C-peptide reflects the amount of pancreatic insulin synthesis, since this peptide and this hormone are usually present in equal amounts. Because the concentration of insulin varies between the portal and the peripheral circulations, the serum levels of C-peptide substitute for those of insulin in distinguishing type 1 from type 2 diabetes. Since pancreatic insulin release in type 1 diabetes is deficient, the peptide level in this type of diabetes is usually reduced. In contrast, the levels of C-peptide in type 2 diabetes are either higher than normal or within the normal range. Therefore, the blood levels of C-peptide can reflect the type of diabetes. Measurement of this peptide in diabetic patients receiving exogenous insulin injections also helps to assess their actual pancreatic function by determining the amount of insulin they themselves produce. In addition, the detection of high serum C-peptide levels may indicate a functioning B-cell pancreatic tumor, termed an insulinoma, whose cells release excess insulin. It has also been stated that C-peptide may act as an anti-inflammatory agent and may help in smooth muscle repair [24]. C-peptide may be a very important indicator in type 2 diabetes regarding insulin secretion, as a normal peptide level may denote pancreatic release of an ample amount of insulin to which the patient's body does not properly respond [30].

This study aimed at inducing diabetes experimentally to evaluate the role of fat feeding in diabetic rats, as well as its histopathological changes in the liver and kidney, based on the hypothesis that impairment or inhibition of the receptor molecules that control the enzymes responsible for the oxidation and synthesis of fatty acids contributes to fat accumulation and infiltration.

Studied animals & chemicals

The study was performed on forty-eight male albino rats of the Wistar strain, with body weights ranging between 180 and 200 grams. The animals were given food and water. The standard laboratory diet consisted of 21% protein, 5% fat, 4% crude fiber, 8% ash, 1% calcium, 0.6% phosphorus, 3.4% glucose, 2% vitamins, and 55% nitrogen-free carbohydrate extract, while the fatty acid composition of vanaspati and coconut oil was described by Saravanan et al. [31]. The rats were housed in plastic cages under controlled conditions with a cyclical 12-hour light/12-hour dark schedule at a temperature of 22-26°C. Sterile streptozotocin powder was provided by the Pharmacia Company in vials, each of which contained one gram of the active ingredient streptozotocin in addition to 200 milligrams of citric acid. The drug was dissolved in distilled water and kept refrigerated at 2-8°C, away from light.
Indian vanaspati and coconut oil were bought from Chidambaram, India, while the triglyceride (TG) kit, total cholesterol (TC) kit, and C-peptide kit were purchased commercially (Wako, Osaka, Japan). The procedure for making the fat emulsion was described by Zou et al. [32]. One hundred milliliters of emulsified fat contained 40 ml of vanaspati, 30 ml of coconut oil, 10 ml of Tween 80, 5 ml of propylene glycol, and 15 ml of distilled water. The emulsion was kept at 4°C and was shaken before each application to ensure uniformity.

Experimental plan

The male albino rats were separated into 4 groups of 12 rats each, housed 6 rats per plastic cage. The groups were labeled as follows. Group I (control): non-diabetic animals that received the standard laboratory diet (LD). Group II: non-diabetic animals that received 5 ml of high-fat (HF) emulsion. Group III: diabetic animals provided with the standard laboratory diet. Group IV: diabetic rats provided with 5 ml of the high-fat emulsion. Serum glucose levels (mg/dl), serum C-peptide levels (ng/ml), and serum lipids (mmol/l), in the form of total cholesterol (TC) and triglycerides (TG), were estimated under non-fasted conditions at the start of this work.

Diabetes induction in rats

Streptozotocin was injected intravenously at a dose of 60 mg/kg body weight in two animal groups (groups III and IV). Diabetes developed within three days owing to damage to pancreatic islet B-cells, as stated by Karunanayake et al. [33]. All four groups of rats were retained in their cages under feeding control. Serum levels of C-peptide (ng/ml), glucose (mg/dl), triglycerides (TG; mmol/l), and total cholesterol (TC; mmol/l) were measured under non-fasted conditions every 2 weeks for 4 months, according to Bhuyan et al. [34], to evaluate the role of a fat-rich diet in diabetes. Excess fat intake is associated with the glucose-insulin metabolic disruption contributing to the development of T2D, and with hyperlipidemia, evidenced by hypercholesterolemia and hypertriglyceridemia, which promotes organ ischemia that may interfere with fatty acid oxidation and eventually lead to fatty infiltration of the parenchymatous organs. To adapt the animals to fat-rich oral intake, the daily oral dose was increased gradually from one to five milliliters over the first five days and maintained thereafter at five milliliters daily. The animals on the standard LD received an identical volume of water through a gastric tube, while those not treated with streptozotocin were given the same volume of isotonic saline. All animal groups were monitored daily for clinical manifestations throughout the study.

Estimation of levels of C-peptide protein, glucose, TG, and TC in rat serum

All animals were anesthetized by exposure to ether for 2 minutes, which does not influence serum glucose or C-peptide values.
A blood sample was taken from the tail of each non-fasted rat and placed in a sterile tube, and the serum was separated by centrifugation and stored at 4°C. Serum glucose (mg/dl) and C-peptide (ng/ml) were estimated, respectively indicating the extent of diabetes and reflecting pancreatic insulin synthesis, together with serum TC (mmol/l) and TG (mmol/l), which may reflect the association of a fat-rich diet and/or uncontrolled diabetes with hyperlipidemia, in a manner similar to that reported by Levi et al. [35]. Blood sampling and serum estimation were performed on all diabetic and non-diabetic animals every 15 days for 4 months, as stated by Thulesen et al. [36].

Histological examination

Tissue specimens were taken from the livers and kidneys of the rats, and each specimen was divided into three parts. One part was prepared as frozen tissue and cut into sections stained with oil red stain. Another part was immediately fixed in 10% formaldehyde saline and used to make tissue blocks and sections stained with hematoxylin and eosin. Both the stained frozen and paraffin slides were evaluated microscopically, qualitatively to demonstrate the histopathological changes and quantitatively by image analysis. The third part was processed for ultrastructural examination of ultrathin stained sections using transmission electron microscopy to detect ultrastructural pathologic changes.

Statistical analytic study

The data are expressed as mean ± SEM and represent the average values for the animals in the same group. Each analysis was repeated three times, and the average was used to compare the groups. The data were subjected to statistical analysis using the SPSS program at our Medical College Statistical Center to assess their significance. P values less than 0.05 were considered indicative of significance (an illustrative sketch of such a group comparison is given at the end of this article).

Outcomes

The study revealed significantly higher serum glucose levels in group IV than in group III (p<0.05), while the levels were within the normal range (120-140 mg/dl) in group II and the control group. In addition, serum total cholesterol (TC) and triglyceride (TG) levels were much greater in the group IV diabetic rats than in groups II and III (p<0.05), while the control group had normal serum lipid levels (normal TC <5.17 mmol/l; normal TG <1.7 mmol/l). Moreover, a marked reduction in serum C-peptide levels was found in group IV in contrast to group III (p<0.05). The group II rats had insignificantly, slightly greater serum C-peptide levels (p>0.05) than the control group, which showed normal peptide values (0.5-3.0 ng/ml) (Tables 1 and 2 and Figures 1-4). In addition, light microscopic examination with routine hematoxylin and eosin staining revealed more extensive intracellular fat accumulation, appearing as a signet-ring pattern owing to displacement of the hepatocyte nucleus against the cell membrane, in group IV (diabetic animals provided with high-fat food) (Figure 5D) than in group II (non-diabetics receiving high-fat food) (Figure 5B). This finding was confirmed by specific oil red staining, which showed a greater amount of intracellular red fat globules in group IV (Figure 6D) than in group II hepatic specimens (Figure 6B).
Moreover, examination of routinely stained renal sections displayed many more fat vacuoles in the glomerular cell cytoplasm and greater supranuclear fat vacuoles within the tubular lining cells of group IV (Figure 7D) than of group II animals (Figure 7B). These features were evidenced, in oil red stained renal sections, by more intense red glomerular and tubular staining in group IV (Figure 8D) than in group II rats (Figure 8B). In addition, ultrastructural cytoplasmic black fat globules within hepatocytes and renal tubular cells were more numerous in diabetic fatty (Figure 9B and 9D) than in non-diabetic fatty animals. The intracellular fatty infiltration detected histopathologically in routinely and specifically stained hepatic (Figures 5C and 6C) and renal sections (Figures 7C and 8C), as well as ultrastructurally, was mild in group III (diabetic non-fat-fed) rats. In contrast, the control group animals displayed neither hepatic (Figure 5A) nor renal fatty infiltration.

Table 1 notes: The values in each group are represented as means ± standard deviation. Significant p values (p<0.05) of both C-peptide and serum glucose were detected between (*) and (**); however, insignificant p values (p>0.05) of C-peptide were displayed between (G1#) and (G2##). Group I (control, non-diabetic, fed the ordinary lab diet); Group II (non-diabetic, fed the fatty diet); Group III (diabetic, fed the ordinary lab diet); Group IV (diabetic, fed the fatty diet); Day 0 = the first day after fat feeding following induction of diabetes by streptozotocin injection. Normal serum C-peptide was 0.5-3.0 ng/ml; normal serum glucose was 120-140 mg/dl.

Table 2 notes: The values in each group are represented as means ± standard deviation. Significant p values (p<0.05) were detected between (*) and (**) from the second week's values of the study onward. Group labels and Day 0 are defined as above. Normal serum TC was <5.17 mmol/l and normal TG was <1.7 mmol/l.

Discussion

Excessive overweight and T2D are apparently worldwide problems and are considered critical obstacles to personal health. The type of food consumed and lack of muscular exercise are regarded as risk factors for the incidence of these diseases. Thus, the intake of fats and oils in developing countries is an important area of research because of the rapidly increasing prevalence of these metabolic diseases in those countries. Available data show an increase in the supply of and demand for various oils in developing countries. However, little information is available concerning the relationship of total fat and fat type to obesity and non-insulin-dependent diabetes in most developing countries. Specific oils with distinctive metabolic effects are used widely in some developing countries (e.g., high-saturated-fat ghee or palm oil). Finally, head-to-head comparison of various oils regarding effects on diabetes, lipids, blood pressure, and other cardiovascular risk factors is required [2]. This study revealed higher hyperglycemic levels as well as greater hypercholesterolemic and hypertriglyceridemic values in fat-fed diabetic rats than in diabetic rats fed the laboratory diet. However, fat-fed non-diabetic rats showed normal serum glucose levels despite high serum TC and TG levels.
Hypertriglyceridemia resulting from amplified triglyceride production has an indirect effect on impaired gluconeogenesis, aggravated by chronically low insulin levels. In addition, impaired free fatty acid oxidation and increased fatty acid synthesis from the acetone released in ketoacidotic hyperglycemia are associated with fatty accumulation within the liver, causing its characteristic enlargement. (Figure 4: graph of serum triglycerides in the four studied groups.) These results coincided with those of Boden [18] and Yang et al. [37], who reported an increase in a certain intestinal peptide-degrading enzyme in association with fat-rich nutrition and suggested an association of this intestinal enzyme with the development of non-insulin-dependent diabetes. Moreover, Djousse et al. [38] found scant and contradictory information regarding the relationship between serum glucose levels and omega-3 polyunsaturated fatty acids, although these fatty acids have been shown to decrease the hazards of myocardial ischemia and death. They observed that fish-derived, but not vegetable-derived, omega-3 polyunsaturated fatty acids increased diabetic incidence. Furthermore, Kaushik et al. [39] compared the maximal and minimal intakes of these fatty acids and observed that the larger, but not the smaller, doses were accompanied by a moderately elevated risk of diabetes. Wang et al. [40] and Hodge et al. [41] reported contradictory information, finding that diabetic incidence had no relation to fish-derived polyunsaturated fatty acids. A significant decline in serum very-low-density lipoprotein level was displayed with the intake of omega-3-rich food, while the serum low-density lipoprotein level was insignificantly affected by this food regimen [42]. By contrast, Griffin et al. [43] detected raised low-density lipoprotein following omega-3 management of type 2 diabetics, because these fatty acids reduced the breakdown of intermediate-density lipoproteins, with a consequently greater quantity of low-density lipoproteins. In addition, Karlstrom et al. [42] found that high-density lipoprotein cholesterol, on average, and apolipoprotein A-I were mostly unaffected by supplementary omega-3 administration in the T2D category. Furthermore, our study revealed much reduced serum C-peptide levels in fat-fed diabetic rats in contrast to non-fat-fed diabetics. The fat-fed non-diabetic rats showed slightly increased C-peptide levels, while these serum peptide levels were within the normal range in the control rats. These results agreed with those reported by Hills and Brunskill [24], who mentioned that the serum C-peptide byproduct of insulin reflects the amount of insulin liberated by pancreatic beta cells. In addition, they stated that in inapparent diabetes mellitus, testing of serum C-peptide can be used to determine whether the disease is T1D or T2D. They mentioned that the pancreas does not make any insulin in T1D, which is thus accompanied by a low serum C-peptide level, whereas the level is elevated or within the normal range in T2D as well as in obesity.
Moreover, serological testing of C-peptide aids in assessing the etiology of hypoglycemia, whether due to an external insulin overdose during diabetic therapy or to the presence of a functioning endocrine neoplasm releasing ample insulin. In hypoglycemia caused by insulin overdose, the serum peptide level is decreased, while hypoglycemia resulting from a functioning neoplasm is accompanied by an elevated serum peptide level [23]. Our study also showed more intense histopathological and ultrastructural features of parenchymal fat accumulation within the livers and kidneys of diabetic fat-fed rats than of non-diabetic fat-fed animals. Furthermore, intracellular fatty infiltration was found to a mild degree within the hepatic and renal parenchyma of diabetic non-fat-fed rats and was absent in the non-diabetic non-fat-fed control animals. These findings were identical to those mentioned by Muhlfeld et al. [44] and by Buettner et al. [45]. They found negative apolipoprotein B staining within hepatocytes and renal tissue of normolipidemic control mice and suggested that the positive fatty infiltration detected in renal and hepatic tissue of diabetic mice in particular may be attributable to a dysfunctional fat-removal mechanism in the hepatocytes as well as via the mesangiocytic route. Furthermore, they focused on environmental agents and personal habits, especially lack of muscular activity, as substantial risk factors in diabetes. Obesity and its related disorders, such as dyslipidemia, diabetes, and hypertension, have in fact accompanied excessive hypercaloric feeding. Moreover, Gauthier et al. [46] and Romestaing et al. [47] mentioned that fat-rich food intake can first augment internal organ lipid content, particularly inside parenchymatous organs such as the liver, kidney, and heart, and can secondly result in consequential fat accumulation peripherally and circumferentially under the skin. Therefore, excessive overweight, resistance to secreted insulin (especially intrahepatic), and organ dysfunction may result. The amount of intrahepatically deposited fat may drop but can increase with long-term fat-rich nutrition. Renal damage occurring with hypertriglyceridemia and hypercholesterolemia is suggested to be caused, first, by easy intracellular passage of fat, aided by a deficient membranous barrier between the circulating capillary blood and the glomerular mesangium, together with the fenestrated vascular endothelium of the kidney. As a result, the oxidized serum lipoproteins that gain intracellular entry can bind to the mesangial cells, leading to their rapid multiplication, promotion of the release of inflammatory mediators (the same could occur within the liver), and activation of glycoprotein synthesis in the mesangial matrix. Secondly, oxidized harmful lipoprotein forms vascular atheromatous plaques that can narrow the lumen, causing impaired renal perfusion. Thirdly, the deposited renal fats promote the release of chemotactic agents that attract macrophages and provide them a gateway into the glomeruli, renal tubules, and interstitium. Thus, these three proposed mechanisms can mediate renal injury in the affected animals. A significant correlation has also been found between glomerular macrophage content and expression of transforming growth factor beta (TGF-β), which is considered an essential cytokine facilitating extracellular matrix formation [44].
Conclusion

The findings of this study indicate that hyperlipidemia may lead to significant renal and hepatic injury, particularly with diabetes, which is considered one of the metabolic disorders related to disturbed fat metabolism. Moreover, a fatty diet may aggravate the manifestations of diabetes and therefore may enhance its complications.

Recommendation

In view of the rapid rise in the incidence of type 2 diabetes worldwide, educational programs illustrating its modifiable and non-modifiable risk factors, together with broad public health instruction, particularly promotion of widespread healthy balanced nutrition and of the importance of physical activity, are required to diminish the prevalence and complications of this disease. In addition, further studies correlating the roles of obesity and hyperlipidemia in diabetes, and highlighting their biochemical effects on liver and renal function, are recommended in the near future.
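As a concrete illustration of the group comparison described under "Statistical analytic study" above, the following minimal R sketch compares two groups at a single time point and reports means with dispersion. The study itself used SPSS; the values below are hypothetical placeholders, not data from this work, and a one-way ANOVA across all four groups would be the analogous multi-group test.

    # Hypothetical serum glucose values (mg/dl) for two groups at one time point;
    # the real analysis was performed in SPSS on n = 12 rats per group.
    groupIII <- c(310, 295, 322, 301, 288, 315)   # diabetic, ordinary lab diet
    groupIV  <- c(402, 388, 415, 395, 420, 407)   # diabetic, fat-rich diet

    # Summary statistics, reported in the tables as mean +/- SD
    sapply(list(GIII = groupIII, GIV = groupIV),
           function(x) c(mean = mean(x), sd = sd(x)))

    # Two-sided comparison between the two diabetic groups;
    # p < 0.05 is taken as significant, as in the study
    t.test(groupIV, groupIII)

    # With all four groups in a data frame df with columns glucose and group:
    # summary(aov(glucose ~ group, data = df))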
It is Better with Salt: Aqueous Ring-Opening Metathesis Polymerization at Neutral pH

Aqueous ring-opening metathesis polymerization (ROMP) is a powerful tool for polymer synthesis under environmentally friendly conditions, functionalization of biomacromolecules, and preparation of polymeric nanoparticles via ROMP-induced self-assembly (ROMPISA). Although new water-soluble Ru-based metathesis catalysts have been developed and evaluated for their efficiency in mediating cross metathesis (CM) and ring-closing metathesis (RCM) reactions, little is known with regard to their catalytic activity and stability during aqueous ROMP. Here, we investigate the influence of solution pH, the presence of salt additives, and catalyst loading on ROMP monomer conversion and catalyst lifetime. We find that ROMP in aqueous media is particularly sensitive to chloride ion concentration and propose that this sensitivity originates from chloride ligand displacement by hydroxide or H2O at the Ru center, which reversibly generates an unstable and metathesis-inactive complex. The formation of this Ru-(OH)n complex not only reduces monomer conversion and catalyst lifetime but also influences polymer microstructure. However, we find that the addition of chloride salts dramatically improves ROMP conversion and control. By carrying out aqueous ROMP in the presence of various chloride sources such as NaCl, KCl, or tetrabutylammonium chloride, we show that diblock copolymers can be readily synthesized via ROMPISA in solutions with high concentrations of neutral H2O (i.e., 90 v/v%) and relatively low concentrations of catalyst (i.e., 1 mol %). The capability to conduct aqueous ROMP at neutral pH is anticipated to enable new research avenues, particularly for applications in biological media, where the unique characteristics of ROMP provide distinct advantages over other polymerization strategies.

UV-Vis Spectroscopy. UV-Vis analysis was performed on a Thermo Scientific Evolution™ 350 spectrophotometer equipped with a Peltier heating and cooling system operating at 25 °C. Measurements were carried out using quartz cuvettes with a path length of 10.00 mm.

Dynamic Light Scattering. Hydrodynamic diameters (Dh) and size distributions (PD) of nano-objects were determined by dynamic light scattering (DLS) using a Malvern Zetasizer Nano ZS with a 4 mW He-Ne 633 nm laser module operating at 25 °C. Measurements were carried out at an angle of 173° (back scattering), and results were analyzed using Malvern DTS v7.03 software. All determinations were repeated 4 times with at least 10 measurements recorded for each run. Dh values were calculated using the Stokes-Einstein equation, where particles are assumed to be spherical (the spherical-particle form of this relation is reproduced at the end of this section).

Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry. Mass spectral data were collected using a Bruker-Daltonics Matrix Assisted Laser Desorption Ionization Time-of-Flight (MALDI-ToF) Autoflex III mass spectrometer in reflector mode with positive ion detection. Typical sample preparation for MALDI-ToF MS was performed by making stock solutions in THF of matrix (20 mg/mL), polymer analyte (2 mg/mL), and an appropriate cation source (1 mg/mL).

Synthesis of P(MPEG)100 homopolymers via aqueous ROMP at different pH values. A typical procedure for the synthesis of P(MPEG)100 homopolymers via aqueous ROMP at different solution pH values is described.
Stock solutions were prepared at 1.5 mg/mL (G3) or 1.7 mg/mL (G2) in freshly purified THF, or at 1.6 mg/mL (AM) in deionized water. Then, 100 μL of the G3 or G2 stock solution was added to a vial containing 0.9 mL of an 11.1 mg/mL MPEG solution in 100 mM phosphate buffer adjusted to pH 2-7 using HCl. For AM, 100 μL of the stock solution was added to a vial containing 0.8 mL of a 12.5 mg/mL MPEG solution in 100 mM phosphate buffer adjusted to pH 2-7 using HCl, plus 100 μL of THF. The solutions were rapidly stirred to initiate polymerization (final [MPEG] = 10 mg/mL). After stirring for 2 h at room temperature, aliquots were removed from the polymerization solutions and analyzed using 1H NMR spectroscopy to determine monomer conversion and SEC to calculate Mn and ĐM values, respectively.

Synthesis of P(MPEG)100 homopolymers via aqueous ROMP in the presence of different additives. A typical procedure for the synthesis of P(MPEG)100 homopolymers via aqueous ROMP in the presence of different additives is described. Stock solutions were prepared at 1.5 mg/mL of G3 in freshly purified THF […]; the polymerizations were terminated by addition of a few drops of EVE. Aliquots of each sample were removed to determine monomer conversion using 1H NMR spectroscopy. In the case of polymerization in CH2Cl2, the solvent was removed in vacuo, and the residue was re-dissolved in 5 mL of DI H2O. All samples were purified via dialysis against DI H2O for 48 h. The samples were then lyophilized, transferred to 2 mL glass vials via dissolution in CH2Cl2, and subsequently dried for 24 h in vacuo. The samples were analyzed using 1H NMR spectroscopy, SEC, and MALDI-ToF mass spectrometry.

Synthesis of P(MPEG)20 homo-, di-, and triblock polymers. Stock solutions were prepared at 7.5 mg/mL of G3 in freshly purified THF and 11.1 mg/mL of MPEG in DI H2O containing 100 mM NaCl. Then, 100 μL of the G3 stock solution was added to a vial containing […]. It should be noted that rapid monomer conversion was observed in the first ~2 min of the polymerizations, after which the polymerizations exhibited pseudo-first-order kinetics (see Figure S3, for example). We attributed this initially fast phase of polymerization to turnover by the Ru-Cl2 complex prior to equilibration with the mono- and di-hydroxide species, as shown below. […] Isosbestic points typically occur in two-state processes in which one species changes from its native state to another state without going through an intermediate species [5,6]. In addition, these points can only be observed when there is an equilibrium between the two species and when the individual spectra of the absorbing species cross at a given wavelength. While it was not possible to assign the absorbances in the isosbestic regions to specific electronic transitions of AM, owing to the lack of DFT computational evidence and of reported absorbances for Ru-NHC catalysts in the literature, it was supposed that these points likely arise from absorbances corresponding to the AM-Cl2 and AM-(OH)n species. The presence of these isosbestic points provides significant evidence to support the supposition of a Ru-(OH)n / Ru-Cln equilibrium.

Figure S14. 1H NMR spectra of native G3 in DMSO-d6 (black spectrum) and of G3 in DMSO-d6 after addition of 2 equiv. of NaOH.

Table S7. Characterization data for P(MPEG)10-b-P(MMEG)n diblock copolymers prepared via ROMPISA using G3 in 9:1 v/v H2O/THF with 100 mM NaCl. Measurements were conducted on aliquots taken directly from the reaction vials and diluted 100× with DI H2O.
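For reference, the DLS analysis described above converts the measured translational diffusion coefficient into a hydrodynamic diameter via the Stokes-Einstein equation; for spherical, non-interacting particles this takes the form

    D_h = \frac{k_B T}{3 \pi \eta D_t}

where k_B is the Boltzmann constant, T is the absolute temperature (298 K for the 25 °C measurements here), η is the solvent viscosity, and D_t is the translational diffusion coefficient obtained from the autocorrelation analysis.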
Construction and Comprehensive Analysis for Dysregulated Long Non-Coding RNA (lncRNA)-Associated Competing Endogenous RNA (ceRNA) Network in Gastric Cancer

Long non-coding RNA (lncRNA) is a kind of non-coding RNA with transcripts more than 200 bp in length. LncRNAs can interact with miRNAs as competing endogenous RNAs (ceRNAs) to regulate the expression of target genes, which play a significant role in the initiation and progression of tumors. In this study, we explored the functional roles and regulatory mechanisms of lncRNAs as ceRNAs in gastric cancer, and their potential implications for prognosis. The lncRNA, miRNA, and mRNA expression profiles of 375 gastric cancer tissues and 32 non-tumor gastric tissues were downloaded from The Cancer Genome Atlas (TCGA) database. Differential expression of RNAs was identified using the DESeq package. Survival analysis was estimated based on Kaplan-Meier curve analysis. KEGG pathway analysis was performed using KOBAS 3.0. The dysregulated lncRNA-associated ceRNA network was constructed in gastric cancer based on bioinformatics generated from miRcode and miRTarBase. A total of 237 differentially expressed lncRNAs and 198 miRNAs between gastric cancer and matched normal tissues were screened in our study with thresholds of |log2FC| >2 and adjusted P value <0.01. Eleven discriminatively expressed lncRNAs may be correlated with tumorigenesis of gastric cancer. Seven of the 11 dysregulated lncRNAs were found to be significantly associated with overall survival in gastric cancer (P value <0.05). The newly identified ceRNA network includes 11 gastric cancer-specific lncRNAs, 9 miRNAs, and 41 mRNAs. Collectively, our study will contribute to improving the understanding of lncRNA-associated ceRNA network regulatory mechanisms in the pathogenesis of gastric cancer and will identify novel lncRNAs as candidate prognostic biomarkers or potential therapeutic targets.

Background

Gastric cancer is one of the most common malignancies in the world. According to worldwide cancer incidence and mortality data, in 2012 there were approximately 952,000 new cases of gastric cancer and about 723,000 deaths from the disease. It is one of the 3 leading causes of cancer death worldwide [1]. Because of its high prevalence, poor prognosis, and limited treatment options, gastric cancer remains an important clinical challenge worldwide. By analyzing a large gastric cancer patient clinical database, a retrospective study found that factors such as age, sex, tumor stage, and surgical method were associated with overall survival [2]. Therefore, identification of individualized treatment strategies, including potential biomarkers and therapeutic targets to combat gastric cancer, is urgently needed. The present study explores how gastric cancer-specific lncRNAs act as ceRNAs to regulate target genes and participate in the pathogenesis and prognosis of gastric cancer. The latest results of the Encyclopedia of DNA Elements (ENCODE) project show that more than 90% of the human genome sequence can be transcribed, but only 1-2% of the sequence encodes proteins [3-5]. At present, more than 50,000 lncRNA genes have been cloned and identified in the human genome, but so far the biological functions of only a small fraction of lncRNAs have been experimentally verified. Studies have found that these lncRNAs have important potential applications in the diagnosis, treatment, and prognosis of malignant tumors [6-8].
However, how lncRNAs regulate the expression of genes has remained unclear. At present, much effort is being made to reveal how lncRNAs perform diverse biological functions in malignant tumors. In 2011, Salmena et al. proposed the competing endogenous RNA (ceRNA) hypothesis, which describes a complex post-transcriptional regulatory network in which lncRNAs, mRNAs, and other RNAs act as natural miRNA sponges to suppress miRNA function by sharing 1 or more miRNA response elements (MREs) [9]; this hypothesis is supported by much evidence [10-13]. LncRNAs acting as ceRNAs regulate the levels of protein-coding genes and participate in the regulation of cell biology by competing for miRNAs. miRNAs play an important role in the ceRNA network by combining with target mRNAs and inhibiting mRNA expression [14]. It has been well documented that the interaction between miRNAs and target genes is associated with tumor pathogenesis [15,16]. Studies have shown that each miRNA can control the expression of up to hundreds of transcripts, while each RNA transcript with different miRNA response elements (MREs) may be targeted by multiple miRNAs [17]. In recent years, a growing number of studies have confirmed that the lncRNA-miRNA-mRNA regulatory network plays an important role in tumor pathogenesis and progression, including in breast cancer, gastric cancer, liver cancer, lung cancer, kidney cancer, and other malignant tumors [11,12,18-21]. LncRNAs that harbor sequences similar to their targeted miRNAs can sequester miRNAs away from mRNAs. Poliseno et al. confirmed that the lncRNA PTENP1 up-regulates expression of the gene PTEN by acting as a molecular sponge for miR-19 and miR-20a in prostate cancer and inhibits tumor cell growth [11]. In addition, the lncRNA FER1L4 can influence the expression of the genes PTEN and RB1 by competitively combining with miR-106a-5p, participating in gastric cancer pathogenesis [22]. Therefore, lncRNAs acting as ceRNAs have diverse biological functions that deserve further exploration. In addition, genome-wide analysis of the gastric cancer-associated lncRNA-mediated ceRNA network is lacking, especially in studies with large sample sizes. In this study, based on analysis of the RNA expression profiles of 375 tumor tissues and 32 non-tumor tissues of gastric cancer, we successfully established a gastric cancer-associated ceRNA network based on bioinformatics prediction and correlation analysis, which included 11 lncRNAs, 9 miRNAs, and 41 mRNAs.

Study population

A total of 375 gastric cancer cases were enrolled for comprehensive integrated analysis. The data were downloaded from The Cancer Genome Atlas (TCGA) database. We used the Data Transfer Tool (provided by GDC Apps) to download the level 3 mRNASeq gene expression data, miRNASeq data, and clinical information for these patients (https://tcga-data.nci.nih.gov/). The sequencing data were derived from the Illumina HiSeq RNASeq and Illumina HiSeq miRNASeq platforms. Our research meets the publication guidelines provided by TCGA (http://cancergenome.nih.gov/publications/publicationguidelines).

Differentially expressed analysis

Gastric cancer mRNASeq and miRNASeq data derived from 407 samples, including 375 gastric cancer samples (cohort Tumor) and 32 normal samples (cohort Normal), were downloaded from TCGA. We then merged the tumor and normal sample data and removed genes whose expression was close to zero.
Comparing the gastric cancer group with the normal group, we used the "DESeq" package [23] in R to identify differentially expressed mRNAs (DEmRNAs) with thresholds of |log2 fold change (FC)| >2.0 and adjusted P value <0.01, and differentially expressed miRNAs with |log2FC| >1.5 and adjusted P value <0.01. We used the Encyclopedia of DNA Elements (ENCODE) to define and annotate differentially expressed lncRNAs (DElncRNAs). In our study, we identified the DElncRNAs from the differentially expressed RNAs with cut-off criteria of |log2FC| >2.0 and adjusted P value <0.01.

Constructing the ceRNA network

To clarify the roles of lncRNAs and miRNAs in the ceRNA network, we built the co-expression network of differentially expressed genes, lncRNAs, and miRNAs, visualized using Cytoscape v3.5.0 software. miRNA-targeted mRNAs were retrieved from miRTarBase (http://mirtarbase.mbc.nctu.edu.tw/). The targeted mRNAs of the miRNAs had been verified experimentally in miRTarBase by reporter assay, qRT-PCR, Western blot, microarray, and next-generation sequencing experiments. To further improve the reliability of the ceRNA network, we retained only mRNAs that were differentially expressed between tumor and normal tissues. In addition, lncRNA-miRNA interactions were constructed based on miRcode (http://www.mircode.org/).

Survival analysis

To identify the prognostic DERNA signature, combining the clinical data of the gastric cancer patients in TCGA, we plotted survival curves for samples with differentially expressed lncRNAs, miRNAs, and mRNAs using the "survival" package in R (a minimal sketch of this screening and survival workflow is given at the end of this article). This univariate survival analysis was estimated based on Kaplan-Meier curve analysis. P values less than 0.05 were considered significant.

Identification of DEmRNAs and DEmiRNAs

RNA expression profiles of gastric cancer patients and corresponding clinical information were downloaded using the Data Transfer Tool of the TCGA database. We identified the significant DEmRNAs and DEmiRNAs in gastric cancer samples compared with normal samples. A total of 2024 differentially expressed mRNAs and 198 miRNAs were identified with the "DESeq" package in R. Heat maps with complete-linkage clustering of the DEmRNAs and DEmiRNAs were then built using the "gplots" package in R (Supplementary Figures 1, 2). As a result, there were 1042 (51.48%) up-regulated and 982 (48.52%) down-regulated DEGs. Moreover, a total of 158 (79.79%) up-regulated and 40 (20.21%) down-regulated DEmiRNAs were identified. The DERNAs were subjected to KEGG pathway enrichment with KOBAS 3.0 (http://kobas.cbi.pku.edu.cn/) to preliminarily investigate the tumorigenesis of gastric cancer. We found that the DERNAs were mainly enriched in "Transcriptional misregulation in cancer", "Metabolic pathways", and "Chemical carcinogenesis", which are closely correlated with tumorigenesis (Table 1).

Differentially expressed lncRNAs (DElncRNAs) in gastric cancer

A total of 237 DElncRNAs were identified in our study, with thresholds of |log2FC| >2 and adjusted P value <0.01. To enhance data reliability, those not annotated in ENCODE were removed. Finally, 11 DElncRNAs (9 up-regulated and 2 down-regulated) were identified in gastric cancer samples compared with normal samples (Table 2). Subsequently, to explore the relationship between the DElncRNAs and the prognosis of patients with gastric cancer, overall survival for the 11 DElncRNAs was investigated using Kaplan-Meier curve analysis.
We found that 7 of the 11 DElncRNAs could be considered key DElncRNAs responsible for the prognosis of gastric cancer. These 7 DElncRNAs were significantly associated with overall survival: lncRNA RP11-120K18.2 was positively correlated with overall survival, while the remaining 6 DElncRNAs were negatively associated with overall survival (log-rank P <0.05) (Figure 1).

Differentially expressed miRNAs (DEmiRNAs) in gastric cancer

In our study, 198 DEmiRNAs were identified in gastric cancer samples compared with normal samples, with thresholds of |log2FC| >1.5 and adjusted P value <0.01. Nine DEmiRNAs (5 up-regulated and 4 down-regulated) were selected from the 198 gastric cancer-associated DEmiRNAs in the TCGA data (Table 3). As with the DElncRNAs, overall survival for the 9 DEmiRNAs was also investigated using Kaplan-Meier curve analysis. Four of the 9 DEmiRNAs were significantly associated with overall survival (log-rank P <0.05): high levels of 2 DEmiRNAs, mir-137 and mir-145, were associated with poor prognosis, whereas high levels of the remaining 2 DEmiRNAs, mir-96 and mir-183, were associated with prolonged patient survival time (Figure 2).

Construction of a ceRNA network in gastric cancer

To better understand how lncRNAs mediate mRNAs by combining with miRNAs in gastric cancer, a ceRNA network graph was constructed based on the above data and visualized using Cytoscape v3.5.0 (Figure 3). We found that the 11 DElncRNAs interact with the 9 DEmiRNAs retrieved from the miRcode database (Table 4). We searched for targeted mRNAs of the 9 miRNAs using the miRTarBase database. miRNA-targeted mRNAs not included among the DERNAs (|log2FC| >1.0 and adjusted P value <0.01) were discarded. Each retained miRNA-mRNA pair was included in the network (Figure 4). Overall survival for the DEmRNAs was investigated using Kaplan-Meier curve analysis. To better understand the KEGG pathways involved in the ceRNA network, KEGG pathway analysis of the mRNAs was performed with KOBAS 3.0, and the top 10 significantly enriched KEGG pathways were identified (Table 5). These signaling pathways were cancer-associated, such as "MicroRNAs in cancer", "Transcriptional misregulation in cancer", "Pathways in cancer", "Glioma", "Pancreatic cancer", and "Melanoma". Moreover, we found that ADAMTS9-AS2 may play an important role as a lncRNA in the ceRNA network. ADAMTS9-AS2 interacted with 6 miRNAs (mir-96, mir-137, mir-145, mir-182, mir-204, and mir-19a) and indirectly interacted with 37 miRNA-targeted mRNAs in this network. We used the expression levels of the DElncRNAs and DEmRNAs in regression analysis. Interestingly, the results uncovered a strong positive correlation between the expression of DElncRNAs and DEmRNAs in the ceRNA network (Figure 5). This revealed that DElncRNAs may indirectly interact with mRNAs through miRNAs in gastric cancer. For instance, ADAMTS9-AS2 interacted with MEIS1, TCEAL7, ZEB1, and ILK, mediated through miR-96, miR-145, miR-182, and miR-204. Our findings suggest that the DElncRNA ADAMTS9-AS2 may serve as a key regulator and prognostic marker in gastric cancer.

Discussion

In recent years, many studies have shown that lncRNAs have important biological functions, regulating gene expression at various levels, including epigenetic, transcriptional, and post-transcriptional regulation [24,25]. More and more studies have shown that lncRNAs and miRNAs play a significant role in the pathogenesis and progression of tumors.
There is a complex regulatory network relationship between them. Different studies have revealed that aberrant expression of lncRNAs presents opportunities for diagnostics, prognostics, and therapeutics in different types of cancer [26]. The ceRNA hypothesis was proposed as a mechanism of tumorigenesis, providing an important new clue and a new guiding theory for research on tumor diagnosis and treatment [9]. Compared with protein-coding genes, lncRNAs have significant advantages as diagnostic and prognostic biomarkers [27]. Several studies have confirmed that the differential expression of lncRNAs is closely related to the pathogenesis and prognosis of tumors and can be used as tumor-associated predictors [28,29]. Ren et al. revealed that 5 gastric cancer-specific lncRNAs (CTD-2616J11.14, RP1-90G24.10, RP11-150O12.3, RP11-1149O23.2, and MLK7-AS1) were significantly associated with the overall survival of patients with gastric cancer, and confirmed by multivariate Cox regression analysis that this 5-lncRNA signature was an independent predictor of prognosis [30]. Zhang et al. demonstrated that PTENP1 acts as a ceRNA and participates in carcinogenesis and progression of gastric cancer by sponging miR-106b and miR-93 away from their target PTEN [31]. In addition, Liu et al. found that lncRNA HOTAIR overexpression modulates the derepression of HER2 and promotes the proliferation, migration, and invasion of gastric cancer cells by competitively combining with miR-331-3p/miR-124 [32]. HOTAIR is considered a novel target for HER2-positive patients with gastric cancer, who have high metastatic potential and poor survival. Lü et al. reported that downregulation of lncRNA BC032469 resulted in significant inhibition of proliferation of gastric cancer cells through direct binding of miR-1207-5p to modulate the derepression of hTERT [33]. Moreover, Peng et al. found that lncRNA MEG3 inhibits gastric cancer cell proliferation, invasion, and migration by competitively binding the miR-181 family, upregulating Bcl-2, and suppressing gastric carcinogenesis [34]. In our study, 11 DElncRNAs were identified in gastric cancer samples compared with normal samples. We found that 7 of them were significantly associated with overall survival and could be considered prognostic markers for gastric cancer. Moreover, we noted that the lncRNAs ADAMTS9-AS2 and DLX6-AS1 were included in the ceRNA network. Therefore, we think these lncRNAs may play an important role in the pathogenesis and prognosis of gastric cancer. ADAMTS9-AS2 is an antisense lncRNA overlapping ADAMTS9, located mostly upstream of ADAMTS9, and is considered a new tumor suppressor. Downregulated expression of ADAMTS9-AS2 has been experimentally confirmed in glioma and in non-small cell lung cancer (NSCLC) cells, and ADAMTS9-AS2 expression may be correlated with poor prognosis of NSCLC and glioma through interaction with DNMT1 (DNA methyltransferase 1) [35,36]. Interestingly, our analysis confirmed that decreased expression of ADAMTS9-AS2 was associated with good prognosis in patients with gastric cancer. Based on the above results, we think that this lncRNA may compete with 3 key DEmiRNAs (miR-96, miR-145, and miR-182) to mediate the expression of the miRNA target genes TCEAL7, ZEB1, and ILK. We noted that these target genes were also significantly associated with overall survival.
We then performed a regression analysis between the expression levels of ADAMTS9-AS2 and the miRNA-targeted genes TCEAL7, ZEB1, and ILK that were involved in the newly identified ceRNA network. The results revealed a very strong positive correlation between the expression levels of ADAMTS9-AS2 and those of TCEAL7, ZEB1, and ILK. Furthermore, we found that high expression of DLX6-AS1 indicated a poor prognosis in patients with gastric cancer. Research has confirmed that the expression level of DLX6-AS1 is also up-regulated in lung adenocarcinoma tissues, and high expression levels of DLX6-AS1 were significantly associated with both histological differentiation and TNM stage [37]. Although lncRNAs have received wide attention in recent years, miRNAs also warrant increased attention. There is no doubt that research on tumorigenesis-related pathways regulated by miRNAs is indispensable. Disrupting those miRNAs may result in a permissive tumorigenic state. Dysregulated expression of miRNAs in tumors is reported to play various roles in carcinogenesis. Studies found that miR-23b was overexpressed in gastric cancer patients compared with healthy controls and was associated with multiple clinical parameters, including T stage, distant metastasis, and differentiation [38]. In our study, we identified tumor initiation-related miRNAs in gastric cancer. In addition, we found that 4 of the miRNAs involved in the ceRNA network are closely associated with survival in gastric cancer. Several studies have shown that an evolutionarily conserved miRNA cluster (miR-96, miR-182, and miR-183) is closely related to the occurrence and progression of gastric cancer, and its members are also considered potential therapeutic targets [39,40]. Our survival analysis indicated that the prognosis of gastric cancer patients with low expression of miR-96 and miR-183 is poor. Previous reports have revealed that miR-145 can inhibit invasion of gastric cancer cells by down-regulating cytoplasmic catenin delta-1 (CTNND1) expression and inducing the translocation of CTNND1 and E-cadherin [41]. Additionally, survival analysis demonstrated that low expression of mir-145 can prolong patient survival time. However, more research is needed to identify whether these miRNAs have a specific role in tumorigenesis and prognosis of gastric cancer. KEGG pathway analysis of the ceRNA network showed that the miRNA-targeted genes were mainly enriched in the "microRNAs in cancer", "transcriptional misregulation in cancer", and "pathways in cancer" pathways. Seventeen of the 40 miRNA-targeted genes were included in these 3 pathways, and 3 of them (RECK, ZEB1, and KIT) were associated with overall survival. Several studies have shown that RECK and ZEB1 play an important role in the pathogenesis of gastric cancer [42,43]. Our study revealed how specific lncRNAs interact with miRNAs and coding genes through the successful construction of a lncRNA-miRNA-mRNA ceRNA network in gastric cancer.

Conclusions

In conclusion, 11 cancer-specific lncRNAs were identified from hundreds of candidate lncRNAs in large-scale gastric cancer samples. Research indicates that dysregulation of the ceRNA network can lead to tumorigenesis [44]. We found that some lncRNAs were remarkably associated with overall survival in patients with gastric cancer. Importantly, we have successfully constructed a lncRNA-associated ceRNA network, which brings to light a previously unknown ceRNA regulatory network in gastric cancer.
Our study will contribute to increased understanding of the pathogenesis of gastric cancer and provides novel lncRNAs as candidate prognostic biomarkers or potential therapeutic targets.

Conflicts of interest

None.
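As a minimal illustration of the differential-expression screening and survival workflow described in the Methods above, the following R sketch applies the same thresholds (|log2FC| > 2, adjusted P < 0.01) using the DESeq and survival packages. Object names (count_table, conds, clin) are hypothetical, and the median split used to define high/low expression groups is an illustrative choice, not a step stated in the article.

    library(DESeq)      # differential expression, as in Methods
    library(survival)   # Kaplan-Meier / log-rank, as in Methods

    # count_table: genes x samples matrix of raw counts
    # conds: factor of sample labels, "Tumor" or "Normal"
    cds <- newCountDataSet(count_table, conds)
    cds <- estimateSizeFactors(cds)
    cds <- estimateDispersions(cds)
    res <- nbinomTest(cds, "Normal", "Tumor")

    # Thresholds used for DElncRNAs/DEmRNAs in this study
    de <- subset(res, abs(log2FoldChange) > 2 & padj < 0.01)

    # Kaplan-Meier comparison for one candidate RNA
    # clin: data frame with survival time, status, and candidate expression
    clin$group <- ifelse(clin$expr > median(clin$expr), "high", "low")
    fit <- survfit(Surv(time, status) ~ group, data = clin)
    survdiff(Surv(time, status) ~ group, data = clin)  # log-rank test, P < 0.05

    # ceRNA-style co-expression check between a lncRNA and a target mRNA
    cor.test(clin$lncRNA_expr, clin$mRNA_expr)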
Knock-Down of Endogenous Bornavirus-Like Nucleoprotein 1 Inhibits Cell Growth and Induces Apoptosis in Human Oligodendroglia Cells

Endogenous bornavirus-like nucleoprotein elements (EBLNs) have been discovered in the genomes of various animals including humans, but their functions have seldom been studied. To explore the biological functions of human EBLNs, we constructed a lentiviral vector expressing a short-hairpin RNA against human EBLN1, which successfully inhibited EBLN1 expression by more than 80% in infected human oligodendroglia cells (OL cells). We found that EBLN1 silencing suppressed cell proliferation, induced G2/M phase arrest, and promoted apoptosis in OL cells. Gene expression profiling demonstrated that 1067 genes were up-regulated and 2004 were down-regulated after EBLN1 silencing. The top 10 most upregulated genes were PI3, RND3, BLZF1, SOD2, EPGN, SBSN, INSIG1, OSMR, CREB3L2, and MSMO1, and the top 10 most downregulated genes were KRTAP2-4, FLRT2, DIDO1, FAT4, ESCO2, ZNF804A, SUV420H1, ZC3H4, YAE1D1, and NCOA5. Pathway analysis revealed that these differentially expressed genes were mainly involved in pathways related to the cell cycle, the mitogen-activated protein kinase pathway, p53 signaling, and apoptosis. The gene expression profiles were validated using quantitative reverse transcription polymerase chain reaction (RT-PCR) for these 20 most-changed genes. Three genes closely related to glioma, RND3, OSMR, and CREB3L2, were significantly upregulated and might be key factors in EBLN1's regulation of the proliferation and apoptosis of OL cells. This study provides evidence that EBLN1 plays a key role in regulating cell life and death, thereby opening several avenues of investigation regarding EBLN1 in the future.

Introduction

Up to 8% of the human genome is comprised of genetic material from human endogenous retroviruses (HERVs), which originated from the integration of retroviral DNA into the chromosomes of germline cells and subsequent inheritance in offspring [1]. Although most HERVs are inactivated or silenced by mutations or epigenetic modifications, they have served important functions in human evolution and speciation, and can potentially cause or contribute to diseases [2,3]. Mounting evidence has demonstrated that HERVs may be involved in the pathological processes of some neurological and psychiatric disorders, such as multiple sclerosis, schizophrenia, and bipolar disorder [4,5], and of cancers such as melanoma, breast cancer, prostate cancer, and leukemia [6,7]. Thus, investigating HERVs is important for understanding the etiological mechanisms of certain diseases. Retroviruses are thought to be the only viruses that generate genomic HERV DNA insertions. Recently, sequences highly homologous to the nucleoprotein (N) gene of bornavirus, a non-retrovirus, were found in the genomes of several mammalian species, including the human genome, and designated endogenous bornavirus-like N (EBLN) elements [8]. Bornavirus is a non-segmented, negative-sense RNA virus characterized by persistent infection in the cell nucleus [9,10]. Borna disease virus (BDV) is a mammalian bornavirus of the Bornavirus genus in the Bornaviridae family. BDV can infect many vertebrate species, including humans [11-17]. The BDV genome is approximately 8.9 kb long and contains 6 open reading frames (ORFs) encoding N, phosphoprotein (P), X protein (X), matrix protein (M), glycoprotein (G), and polymerase (L) [18].
BDV N is a major structural protein that serves an important role in the formation and transport of ribonucleoproteins [19-21]. Previous results showed that rodent EBLNs might play an important role in BDV infection. Species containing EBLNs could be protected against circulating bornavirus [22]. Similarly, EBLNs in the genome of the thirteen-lined ground squirrel could efficiently inhibit infection and replication of extant bornavirus by regulating the activity of the BDV polymerase [23]. Recently, Parrish et al. [24] reported that EBLNs can give rise to PIWI (P-element induced wimpy testis)-interacting RNAs (piRNAs), a class of small RNAs known to silence transposons, engendering an RNA-mediated, sequence-specific antiviral immune memory. Nevertheless, the functions of Homo sapiens EBLNs are still not well known. To date, a total of seven EBLNs have been found in the human genome [25]. The EBLN1 gene shows up to 58% similarity to the nucleotide sequence of the BDV N gene and contains a long ORF encoding a potential protein of 366 amino acids. Although evidence of EBLN1 protein expression is lacking, EBLN1 mRNA expression has been confirmed by reverse transcription polymerase chain reaction (RT-PCR) in several cell lines including OL, HEK293T, and MOLT-4 cells [8,25], suggesting that EBLN1 might be a pseudogene or might function as a noncoding RNA. Here, we report that EBLN1 silencing by a short-hairpin RNA (shRNA)-expressing lentivirus inhibits human oligodendroglia (OL) cell proliferation and induces apoptosis. Furthermore, the gene expression profiles of OL cells after EBLN1 knockdown were analyzed using a cDNA microarray. Our work expands understanding of the functions of the EBLN1 gene.

Effective Reduction of Endogenous Bornavirus-Like Nucleoprotein 1 (EBLN1) mRNA Expression with an shRNA

To explore the biological roles of EBLN1 in human OL cells, three target-specific EBLN1 shRNA-expressing lentiviruses and a negative-control shRNA-expressing lentivirus were generated. After a 96-h lentivirus infection, EGFP (enhanced green fluorescent protein)-positive OL cells in each group were counted under a fluorescence microscope to determine the infection efficiencies, which were 93.6%, 94.0%, 92.4%, and 95.0% in the LV (lentivirus)-EBLN1-shRNA1, 2, 3, and LV-NC-shRNA groups, respectively (Figure 1). Because evidence of EBLN1 protein expression is lacking, we detected only EBLN1 mRNA expression in OL cells by RT-qPCR to determine the interference efficiency. Compared with the LV-NC-shRNA group, EBLN1 mRNA expression in the three LV-EBLN1-shRNA groups was reduced by 81% (p < 0.001), 28% (p < 0.05), and 70% (p < 0.001), respectively. In addition, EBLN1 mRNA expression was comparable between the LV-NC-shRNA group and the uninfected group (p > 0.05) (Figure 2A). Electrophoresis of the quantitative reverse transcription polymerase chain reaction (qRT-PCR) products further confirmed that EBLN1 mRNA was highly expressed in OL cells, at a level comparable to GAPDH (glyceraldehyde-3-phosphate dehydrogenase), and that LV-EBLN1-shRNA markedly suppressed EBLN1 (Figure 2B). Thus, LV-EBLN1-shRNA1 was the most effective lentivirus for EBLN1 silencing in OL cells, and the interference effects were specific to EBLN1. Therefore, LV-EBLN1-shRNA1 was used for the EBLN1 knockdown group in the subsequent experiments.

EBLN1 Silencing Inhibits Oligodendroglia (OL) Cell Proliferation

To test the effects of EBLN1 knock-down on proliferation, CCK-8 (Cell Counting Kit-8) assays were performed. The results showed that cell growth was significantly inhibited in the LV-EBLN1-shRNA group compared with the control and LV-NC-shRNA groups. A significant reduction of cell proliferation was observed in the LV-EBLN1-shRNA group at 72 h post-inoculation (about 26%). The inhibition became more evident (up to 84%) at 5 days post-inoculation (Figure 3A; p < 0.001). Meanwhile, the expression of EBLN1 was reduced by 86% at 5 days post-inoculation.

EBLN1 Silencing Induces Apoptosis and Inhibits Colony Formation of OL Cells

To determine the effects of EBLN1 gene silencing on apoptosis in OL cells, flow cytometry was performed with annexin V-APC (allophycocyanin) staining at 96 h post-inoculation. Our results showed that the percentage of apoptotic OL cells significantly increased in the LV-EBLN1-shRNA group (5.783 ± 0.138), compared with the LV-NC-shRNA (2.99 ± 0.232) and control groups (2.583 ± 0.313) (Figure 3B; p < 0.001). Meanwhile, the expression of EBLN1 was reduced by 81% at 5 days post-inoculation. Colony-formation assays were conducted to gain insight into the long-term effects of EBLN1 silencing on cell proliferation. OL cells in each group were incubated for 14 days in 6-well plates, and the colonies were then counted. The numbers of colonies were 67 ± 2.65 in the uninfected group, 63 ± 3.00 in the LV-NC-shRNA group, and 30 ± 2.00 in the LV-EBLN1-shRNA group. There were no significant differences between the LV-NC-shRNA and control groups (p > 0.05).
However, the colony-forming efficiency of the EBLN1 knockdown OL cells was markedly less than that of OL cells in the LV-NC-shRNA and control groups (Figure 3C,D; p < 0.001).

EBLN1 Silencing Induces G2/M Phase Arrest in OL Cells

To determine the effects of EBLN1 silencing on cell-cycle control in OL cells, a flow cytometry assay was performed when the expression of EBLN1 was reduced by 81% at 96-h post-inoculation. Compared with the LV-NC-shRNA group, the proportion of cells in S phase significantly decreased in the LV-EBLN1-shRNA group (p < 0.01), but that in G2/M phase significantly increased (p < 0.01) (Figure 4).

EBLN1 Silencing Has No Effect on the Migration of OL Cells

At 4 days after lentivirus infection, when the expression of EBLN1 was reduced by 81%, wound-scratch assays were performed to test cell migration capability. Our results demonstrated that the wound-closure rates did not differ significantly between the LV-EBLN1-shRNA and LV-NC-shRNA groups at 4 and 8 h after scratching (p > 0.05; Figure 5A,C). Transwell migration assay results showed that the migration rates in the LV-EBLN1-shRNA, LV-NC-shRNA, and uninfected groups were 0.246 ± 0.028, 0.265 ± 0.013, and 0.286 ± 0.005, respectively. No significant differences were observed between these groups (p > 0.05; Figure 5B).
Gene ontology analysis showed that the products of most differential genes localized to the cytoplasm, membrane, and intracellular organelles (Figure 6A). The biological processes regulated by these differential genes were mainly associated with multicellular organismal development, signal transduction, intracellular signaling cascades, and cell proliferation (Figure 6B). Their molecular functions mainly involved transferase activity, enzyme regulator activity, DNA binding, phosphotransferase activity, kinase activity, and transcription factor binding (Figure 6C). Pathway analysis demonstrated that they were mainly related to cell cycle, ubiquitin hydrolysis, mitogen-activated protein kinase (MAPK), p53, and apoptosis pathways (Figure 6D).

Verification of Differential Genes by Quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR)

To validate the results of the cDNA array, qRT-PCR was performed to detect the expression of the top 20 most-changed genes, with GAPDH serving as an endogenous control. As shown in Figure 7, the results of qRT-PCR were consistent with the gene expression profiles and confirmed that the microarray data were reliable. Three genes, RND3, OSMR, and CREB3L2, were focused on for their close relation to glioma. Compared with the LV-NC-shRNA group, their relative expressions in EBLN1 knockdown OL cells were 8.24 ± 0.29, 3.60 ± 0.39, and 5.25 ± 0.37, respectively (Figure 7).

Discussion

BDV is an ancient neurotropic virus and the etiological agent of fatal encephalitis in horses and sheep. BDV is characterized by persistent infection in the nervous system of many animals. In experimental animals, such as rats and mice, BDV can induce cognitive deficiencies and behavioral alterations [26][27][28]. Epidemiologic studies have demonstrated that BDV can infect healthy humans and is possibly associated with some neuropsychiatric disorders including bipolar depression and schizophrenia [13,15,[29][30][31]. The discovery of EBLNs in the human genome further confirmed the close relationship between BDV and humans. Thus far, expression at the transcriptional level of all seven human EBLNs has been confirmed. However, the functions of human EBLNs are not yet clear.
Seeing that HERVs are involved in some human diseases, including cancers, such as endogenous retroviral LTR, K, and Fc-1 [6,32,33], here we investigated the biological functions of EBLN1 in human OL cells. The OL cells used in this study were a permanent cell line from human fetal brain (details in Section 4.1). The expression level of EBLN1 in OL cells was first confirmed by qRT-PCR, although previous reports had demonstrated that EBLN1 was detectable in OL cells [8,25]. Our results showed that EBLN1 was highly expressed in OL cells, almost at the same level as GAPDH. Then, we constructed three lentiviral vectors expressing EBLN1 shRNAs and tested their inhibition efficiencies. Due to the lack of evidence that the EBLN1 protein is expressed, we only measured the inhibition efficiencies at the EBLN1 mRNA level by qRT-PCR. Compared to the negative control, the LV-EBLN1-shRNA1 lentiviral vector most efficiently reduced the expression of EBLN1 mRNA in OL cells, by 81% after a 96-h infection. Therefore, LV-EBLN1-shRNA1 lentiviral vectors were used in subsequent experiments. Previous studies demonstrated that BDV had diverse effects on cellular proliferation and apoptosis, depending on the virus isolate used and the host cell lines infected. BDV strain He/80 infection decelerated the proliferation of primary fibroblast cells from Lewis rats [20]. Moreover, BDV He/80 infection increased apoptosis of granule cell neurons in neonatal Lewis rats, but inhibited apoptosis of C6 rat astroglioma cells [34,35]. In addition, BDV strain Hu-H1, isolated from a bipolar patient by Bode in 1996 [11], inhibited cellular proliferation and promoted apoptosis in OL cells via Bax upregulation and Bcl-2 downregulation, contrary to laboratory BDV strain V [36]. Considering that EBLN1 shares high identity with the BDV N gene and that the N protein is an important BDV antigen, we focused on the effect of EBLN1 on cellular proliferation and apoptosis in OL cells.
Our results showed that cellular proliferation significantly decreased from 72 h to 5 days after lentivirus infection, which was consistent with the finding that the inhibition efficiency of LV-EBLN1-shRNA reached above 80% after a 96-h lentivirus infection. Moreover, the colony formation of EBLN1-silenced OL cells was noticeably decreased. The cell cycle was measured at 96 h after lentivirus infection by flow cytometry. Compared to the LV-NC-shRNA group, the proportion of OL cells in S phase significantly decreased, while that in G2/M phase significantly increased in the LV-EBLN1-shRNA group. Taken together with the cell proliferation results, these data showed that EBLN1 silencing could induce G2/M phase arrest in OL cells. To confirm this result, cell-cycle assays in future work should be performed after synchronization by inducing G2/M or S arrest. Moreover, the apoptosis of OL cells was tested after a 96-h lentivirus infection. The results showed that EBLN1 silencing increased apoptosis, which was consistent with our findings that the inhibition efficiency of LV-EBLN1-shRNA reached over 80% after a 96-h lentivirus infection and that the growth of EBLN1-silenced OL cells was decreased at 4 days post-infection. However, the migration abilities of OL cells in both the wound-scratch and transwell migration assays were not significantly changed by EBLN1 silencing. Since these migration assays might be affected by cell death, the migration results should be further evaluated in future work. To explore the mechanism underlying these findings, we further investigated differentially expressed genes in EBLN1 knockdown OL cells by microarray analysis. Our results revealed that 3071 genes were dysregulated in OL cells after EBLN1 silencing, which suggested that EBLN1 is a key gene involved in multiple functions in OL cells. Pathway analysis demonstrated that these differentially expressed genes mainly affect cell-cycle progression, apoptosis, MAPK, and p53 signaling pathways, which was supported by the alterations in proliferation and apoptosis of EBLN1 knockdown OL cells. qRT-PCR was performed to validate the results of the cDNA array.
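For reference, the criterion used in the Methods to call a gene differentially expressed (absolute fold change of at least 2 with p ≤ 0.05) amounts to a simple table filter. The following Python sketch illustrates that filter only; the column names and the probe values are assumptions for illustration, not the actual array pipeline or data.

```python
# A minimal sketch of the differential-gene filter (|fold change| >= 2 and
# p <= 0.05); column names and example values are assumptions.
import pandas as pd

def select_degs(df: pd.DataFrame, fc_cutoff: float = 2.0,
                p_cutoff: float = 0.05) -> pd.DataFrame:
    """Keep probes whose absolute fold change and p-value pass the cutoffs."""
    # Downregulated probes (fold change < 1) are folded to their reciprocal,
    # so a 0.4-fold probe counts as a 2.5-fold change.
    abs_fc = df["fold_change"].where(df["fold_change"] >= 1,
                                     1 / df["fold_change"])
    return df[(abs_fc >= fc_cutoff) & (df["p_value"] <= p_cutoff)]

probes = pd.DataFrame({
    "gene": ["RND3", "OSMR", "CREB3L2", "ACTB"],
    "fold_change": [8.24, 3.60, 5.25, 1.05],
    "p_value": [0.001, 0.004, 0.002, 0.60],
})
print(select_degs(probes)["gene"].tolist())  # ['RND3', 'OSMR', 'CREB3L2']
```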
The mRNA expressions of the top 20 most-changed genes were consistent with the gene expression profiles, which confirmed that the microarray data were reliable. Some of these most-changed genes, such as RND3, OSMR, and CREB3L2, are closely related to glioma. These genes are frequently dysregulated in glioma cells and play important roles in regulating cell growth and apoptosis. We presumed that these three genes might be key factors through which EBLN1 regulates the proliferation and apoptosis of OL cells. RND3 (also named RhoE) is an atypical member of the Rho GTPase family. RND3 inhibits Rho kinase-mediated biological functions including actin cytoskeleton formation, cell transformation, proliferation, and apoptosis. The role of RhoE in cancer is currently controversial [37][38][39]. In human glioblastoma, RND3 expression was found to be significantly decreased, which caused increased Notch-pathway activity and enhanced glioma cell proliferation [40]. RND3 functioned as a negative regulator of the Notch pathway by promoting the ubiquitination and degradation of the Notch transcriptional complex [40]. In this study, RND3 was upregulated after EBLN1 silencing in OL cells, while MAML2, which encodes a coactivator for Notch proteins, was downregulated. This suggests that one of the mechanisms by which EBLN1 regulates the proliferation and apoptosis of OL cells might be RND3-induced repression of the Notch pathway. The oncostatin M receptor (OSMR) gene encodes a subunit of the type-II receptor for oncostatin M (OSM). OSM is an IL-6 family cytokine that is associated with multiple biological processes and cellular responses including growth, differentiation, and inflammation [41]. High expression of OSMR was observed in glioblastoma, especially in the mesenchymal subtype, and was regarded as a prognostic risk factor. OSM-OSMR signaling regulated the pathological progression of glioma via STAT3, a key transcription factor involved in the Janus kinase-signal transducer and activator of transcription (JAK/STAT) signaling pathway, which regulates the expression of genes involved in diverse functions such as apoptosis, proliferation, and differentiation [42]. Unlike in glioblastoma, OSMR was upregulated after EBLN1 silencing in OL cells. Meanwhile, we found that some genes in the JAK/STAT signaling pathway were downregulated, such as STAT6 and its coactivator NCOA1, as well as the anti-apoptotic proteins Bcl-2 and MCL1. Therefore, the effects of EBLN1 on cell proliferation and apoptosis might be related to the downregulation of anti-apoptotic proteins through the JAK/STAT pathway, especially STAT6, regulated by OSM-OSMR signaling. CREB3L2 (also known as BBF2H7), an endoplasmic reticulum stress transducer, belongs to the cAMP responsive element-binding (CREB)/activating transcription factor (ATF) family. ATF5, another member of the CREB/ATF family, is a target gene of CREB3L2. Previous findings demonstrated that CREB3L2 could suppress apoptosis via the ATF5-MCL1 pathway in growth plate cartilage [43]. ATF5 is highly expressed in primary brain tumors, especially in glioblastoma, and plays a key role in promoting cancer cell survival through the CREB3L2/ATF5/MCL1 pathway. In this pathway, induction of CREB3L2 by RAS/MAPK or phosphatidylinositol 3-kinase (PI3K) signaling can activate ATF5, which promotes survival by stimulating transcription of the anti-apoptotic MCL1 (myeloid cell leukemia 1) protein [44].
In our work, the expression of ATF5 showed no statistically significant difference between the EBLN1 silencing group and the negative control group. However, other members of the CREB/ATF family, including ATF2 and ATF3, were downregulated. Though the relationship between upregulated CREB3L2 and downregulated ATF2 and ATF3 is unknown, the effects of EBLN1 silencing on cell proliferation and apoptosis might be explained by the low expression of ATF2, ATF3, and MCL1.

Materials and Methods

Cells and Culture

The human OL cell line [35] was kindly provided by Hanns Ludwig of the Berlin Free University, Berlin, Germany and Liv Bode, Robert Koch Institute, Berlin, Germany. OL cells are a permanent human oligodendroglia cell line, established by Y. Iwasaki from human fetal brain at the Wistar Institute, PA, USA, and then allocated to Hanns Ludwig in the early 1980s. OL cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM, Invitrogen, Grand Island, NY, USA) with high glucose (4.5 g/L, Gibco, Grand Island, NY, USA), supplemented with 1% penicillin, 1% streptomycin (Sigma, Shanghai, China), and 5% heat-inactivated fetal bovine serum (FBS, Gibco, Grand Island, NY, USA). Cells were incubated at 37 °C in a 5% CO2 incubator.

Construction of an EBLN1 shRNA-Expressing Lentiviral Vector and Infection

We designed EBLN1 shRNAs and a negative-control shRNA based on the mRNA sequence of human EBLN1 (NM_001199938). OL cells were divided into 3 groups: an uninfected group (CON), an LV-NC-shRNA group (NC), and an LV-EBLN1-shRNA group (KD). OL cells were infected for 16 h with LV-EBLN1-shRNA or LV-NC-shRNA in enhanced infection solution at a multiplicity of infection of 2 and subsequently placed in fresh medium. After 72 h, the fluorescence of OL cells was measured and the infection efficiency was calculated as the ratio of fluorescent OL cells to all OL cells.

Detection of EBLN1 mRNA Expression

Quantitative reverse transcription polymerase chain reaction (qRT-PCR) experiments were performed to detect EBLN1 mRNA expression. Total RNA was extracted from OL cells in each group with the Trizol reagent (Invitrogen) and quantified with a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). To avoid the presence of contaminating DNA, total RNA was treated with DNase (Invitrogen) according to the manufacturer's instructions, followed by reverse transcription into cDNA (Takara, Japan). The sequences of the primers used to amplify EBLN1 cDNA were as follows: 5′-ACCTAGCAACAGCAGCAAACTA-3′ (forward) and 5′-CAAATCCCGAAATCCCATAAC-3′ (reverse). The sequences of the primers used to amplify the reference gene GAPDH were as follows: 5′-GGTCTCCTCTGACTTCAACA-3′ (forward) and 5′-AGCCAAATTCGTTGTCATAC-3′ (reverse). PCR was performed using 2 µL of cDNA in a Corbett Research Rotor-Gene 6000 Thermocycler (Corbett Research, Mortlake, Australia) in 25 µL reaction mixtures. Thermocycling conditions consisted of an initial denaturation step for 10 min at 94 °C, followed by 40 cycles of 94 °C for 30 s and 56 °C for 45 s. PCR was repeated in 3 independent experiments. Relative expression levels of EBLN1 mRNA were normalized against that of GAPDH using the 2^(−ΔΔCt) method. Meanwhile, 5 µL of the PCR products was electrophoresed on 2% agarose gels.
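The 2^(−ΔΔCt) normalization referred to above can be made concrete in a few lines of code. This is a generic sketch of the method, not the authors' script, and the Ct values below are invented for illustration.

```python
# A minimal sketch of the 2^(-ΔΔCt) method; all Ct values are invented.

def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene, normalized to a reference gene
    (here GAPDH) and to a control group, by the 2^(-ΔΔCt) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt, sample group
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control group
    dd_ct = d_ct_sample - d_ct_control                  # ΔΔCt
    return 2 ** (-dd_ct)

# An ~81% knockdown corresponds to a relative expression of about 0.19:
print(relative_expression(24.4, 18.0, 22.0, 18.0))  # 2^(-2.4) ≈ 0.19
```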
Cell Proliferation Analysis

Cell proliferation was analyzed using the Cell Counting Kit-8 (CCK-8; Beyotime, Shanghai, China). OL cells were plated in each well of 96-well plates at a density of 2000 cells in 100 µL culture medium. At the same time, OL cells in the LV-EBLN1-shRNA group were infected with lentivirus. After various incubation periods ranging from 1 to 5 days, 10 µL of CCK-8 solution was added to the cells and incubated for 2 h at 37 °C. The absorbance was measured at 450 nm with an ultraviolet spectrometer (Bio-Rad, Shanghai, China). The experiments were performed in quadruplicate and repeated in triplicate.

Analysis of Apoptosis

Cell apoptosis was measured by annexin V-APC staining. After a 96-h incubation, infected cells were washed twice with PBS and resuspended in binding buffer at a density of 5 × 10^5 cells/mL, followed by the addition of 10 µL Annexin V-APC. After gentle mixing, the cells were incubated for 15 min at room temperature in the dark. The cells were analyzed by flow cytometry within 1 h.

Colony-Formation Assay

After 96 h of lentivirus infection, 800 OL cells from each group were plated into each well of 6-well plates and cultured for 14 days at 37 °C in a 5% CO2 incubator. Then, cells were rinsed twice with PBS and fixed in 1 mL of paraformaldehyde for 30 min. The cells were stained with Giemsa stain for 20 min and then washed with ddH2O. The plates were dried at room temperature and colonies containing more than 50 cells were counted under a light microscope.

Cell Cycle Analysis

After 96 h of lentivirus infection, OL cells (1 × 10^6) in each group were washed with cold phosphate-buffered saline (PBS), fixed in 70% ethanol at 4 °C for at least 2 h, and stained with 0.5 mL propidium iodide solution for 30 min in the dark. Measurement of DNA content and cell-cycle analysis were performed by flow cytometry (BD Biosciences, San Jose, CA, USA). All experiments were repeated in 3 independent experiments.

In Vitro Wound-Healing Assay

After a 96-h lentivirus infection, 3 × 10^4 OL cells were seeded into 96-well plates and grown at 37 °C in a 5% CO2 incubator. When the confluence reached 90%, the medium was removed, and a wound in the monolayer was made using a pipette tip, followed by washing with PBS three times to remove non-adherent cells. The wound area was photographed at 0, 4, and 8 h post-wounding. The width of the wound was measured, and the migration rates were calculated.

Transwell-Migration Assay

After a 96-h lentivirus infection, a transwell-migration assay was performed. Transwell chambers were placed in plates, and 100 µL of serum-free medium was added into the upper chambers. After incubation at 37 °C for 2 h, the medium was removed. Subsequently, 1 × 10^5 infected cells were re-suspended in 100 µL serum-free medium and added to the upper chambers, and medium containing 30% FBS was added to the lower chambers, followed by incubation at 37 °C in a CO2 incubator for 24 h. After removing the non-migrated cells, the upper chambers were stained with Giemsa for 20 min, after which the migrated cells were observed under a microscope and dissolved in 10% acetic acid. Then, the absorbance (optical density) at 570 nm (OD570) was measured. Meanwhile, 5000 infected OL cells were incubated in 96-well plates at 37 °C for 4 h. After 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) staining, the absorbance at 490 nm (OD490) was measured with a spectrophotometer. The migration rate was calculated using the following equation: migration rate = OD570/OD490.
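The two migration read-outs defined above reduce to simple ratios. The sketch below restates them in code; the OD values and scratch widths passed in are placeholders, not measured data.

```python
# Sketches of the migration read-outs described in the Methods above;
# the numeric inputs are placeholders.

def transwell_migration_rate(od570: float, od490: float) -> float:
    """Ratio of migrated-cell Giemsa signal (OD570) to the MTT viability
    signal (OD490), as defined above."""
    return od570 / od490

def wound_closure_rate(width_0h: float, width_t: float) -> float:
    """Fraction of the initial scratch width closed by time t."""
    return (width_0h - width_t) / width_0h

print(transwell_migration_rate(0.123, 0.500))  # 0.246, the KD-group level reported above
print(wound_closure_rate(800.0, 560.0))        # 0.30, i.e., 30% closure
```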
Microarray Analysis

The cDNA microarray was analyzed using GeneChem Technology (Shanghai, China). Briefly, total RNA was extracted from OL cells after a 96-h lentivirus infection, then cDNA was synthesized and transcribed into biotin-labeled aRNA using the GeneChip 3′ IVT Express Kit. Subsequently, the aRNA was hybridized to a PrimeView™ Human Gene Expression Array Plate (Affymetrix, Shanghai, China), which was scanned using a GeneChip Scanner 3000. Differentially regulated genes with absolute fold-change values ≥ 2 and p-values ≤ 0.05 were used for subsequent gene ontology and pathway analysis.

Verification of Differential Genes by Quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR)

The top 20 most-changed genes were selected and verified by qRT-PCR using the primers listed in Supplementary Materials Table S2. The primers for GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and the qRT-PCR reaction conditions were as described in Section 4.3, except that the annealing temperature was 58 °C.

Statistical Analysis

Statistical analysis was performed with SPSS software, version 19.0 (IBM Corporation, New York, NY, USA). Quantitative data were expressed as the mean ± SD. Differences between groups were analyzed with Student's t test. A value of p < 0.05 was considered statistically significant.

Conclusions

In summary, we focused on the cellular biological functions of EBLN1 in human OL cells. Knockdown of EBLN1 by lentivirus-mediated shRNA suppressed proliferation and induced apoptosis. Numerous genes were dysregulated by EBLN1 silencing, some of which, such as RND3, OSMR, and CREB3L2, may be key target genes of EBLN1. Our work provides meaningful data and offers a new direction for further studies on EBLN1. Though we have proposed possible mechanisms underlying the function of EBLN1, more work is needed to confirm them or to uncover other molecular mechanisms in future studies.
2016-04-23T08:45:58.166Z
2016-03-24T00:00:00.000
{ "year": 2016, "sha1": "1d70d3f8e871956ef301f88a96b11d04bfffd99e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/17/4/435/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1d70d3f8e871956ef301f88a96b11d04bfffd99e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
239469763
pes2o/s2orc
v3-fos-license
Maintenance of Postharvest Quality and Reactive Oxygen Species Homeostasis of Pitaya Fruit by Essential Oil p-Anisaldehyde Treatment

The performance of p-Anisaldehyde (PAA) for preserving pitaya fruit quality and the underpinning regulatory mechanism were investigated in this study. Results showed that PAA treatment significantly reduced fruit decay, weight loss and loss of firmness, and maintained higher contents of total soluble solids, betacyanins, betaxanthins, total phenolics and flavonoids in postharvest pitaya fruits. Compared with control, the increase in hydrogen peroxide (H2O2) content and superoxide anion (O2•−) production was inhibited in fruit treated with PAA. Meanwhile, PAA significantly improved the activity of the antioxidant enzymes superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT). Moreover, PAA-treated pitaya fruit maintained higher ascorbic acid (AsA) and reduced glutathione (GSH) contents but lower dehydroascorbate (DHA) and oxidized glutathione (GSSG) contents, thus sustaining higher ratios of AsA/DHA and GSH/GSSG. In addition, the activities of ascorbate peroxidase (APX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR) and dehydroascorbate reductase (DHAR), as well as the expression of HpSOD, HpPOD, HpCAT, HpAPX, HpGR, HpDHAR and HpMDHAR, were enhanced after PAA treatment. The findings suggest that postharvest application of PAA may be a reliable method to control postharvest decay and preserve the quality of harvested pitaya fruit by enhancing the antioxidant potential of the AsA-GSH cycle and activating an antioxidant defense system to alleviate reactive oxygen species (ROS) accumulation.

Introduction

Pitaya fruit (Hylocereus undatus) is a tropical fruit originating from Latin America [1]. According to pulp and peel colour, pitaya fruit is classified into white flesh/yellow peel, white flesh/red peel, and red flesh/red peel fruits [2]. Owing to its desirable taste and texture and its abundant health-promoting compounds, the cultivation and consumption of pitaya have been growing substantially in recent years [3]. Although pitaya is a non-climacteric fruit, it deteriorates and senesces rapidly after harvest due to its susceptibility to fungal diseases and to physiological disorders leading to shrinkage, thus limiting its storage and marketing potential [4,5]. Several reported treatments, such as cold storage [6], controlled atmosphere [7], plant hormones [8], X-ray irradiation [9] and synthetic chemicals [10], have been proven to control postharvest diseases and fruit quality deterioration to varying degrees. Nevertheless, there is a continuing search for safer, low-cost, potent senescence inhibitors and antimicrobial technologies to maintain the quality of harvested pitaya fruit. Essential oils are now increasingly used for the preservation of several fruits and vegetables due to their safety and antimicrobial properties. p-Anisaldehyde (PAA) (4-methoxybenzaldehyde) is a main component of the essential oil derived from seeds of Pimpinella anisum [11]. In laboratory media, fruit purees and fruit juices, PAA has been confirmed to possess antimicrobial activity against a number of foodborne bacteria (such as Bacillus subtilis, Pseudomonas aeruginosa, Listeria monocytogenes, and Staphylococcus aureus), the fungus Fusarium oxysporum, yeasts (Candida) and mold strains (Aspergillus niger) [12]. Recently, a p-anisaldehyde/β-cyclodextrin combination used as a fumigation agent effectively suppressed the growth of fungi in strawberry and preserved its storage quality [13].
This implies that PAA might regulate the postharvest physiological and biochemical behaviour of horticultural products. However, the potential of PAA for controlling the postharvest deterioration of other harvested fruits, and its underlying regulatory mechanisms, remain largely unknown. Postharvest senescence and fruit quality deterioration involve metabolic disorder of reactive oxygen species (ROS) [14]. Overproduction of ROS, including superoxide anion radicals (O2•−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), triggers oxidative damage to macromolecules, resulting in irreversible, deleterious changes in living cells [15]. ROS production is interlinked with ROS scavengers, which encompass enzymatic and non-enzymatic systems [16]. Enzymatic scavengers mainly include superoxide dismutase (SOD), peroxidase (POD), catalase (CAT), and the following enzymes involved in the ascorbic acid-glutathione (AsA-GSH) cycle: ascorbate peroxidase (APX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR), and dehydroascorbate reductase (DHAR) [17]. The non-enzymatic scavenger system includes AsA, GSH, α-tocopherol, flavonoids, carotenoids and proline [18]. Mounting evidence from several decades indicates that excessive ROS generation caused by disruption of the ROS production-scavenging balance can damage the cellular membrane structure, accelerate cell death, and reduce the storability of harvested fruits, such as table grape [19], winter jujube [20], blueberries [21], and mango [22]. On the contrary, postharvest treatments of harvested fruits, such as near-freezing temperature [23], acidic oxidizing water [24], and essential oils [25], have been proven to retain higher antioxidant capacity and ROS-scavenging ability, which help reduce pathogen infection and maintain fruit quality. In this sense, ROS homeostasis may serve as a common regulatory mechanism for fruits to control the senescence process and maintain fruit quality. Thus, in this work, the changes in physio-chemical properties related to fruit quality, total phenolics and flavonoids contents, 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging rate, ROS generation, activities of ROS-scavenging enzymes, and components of the AsA-GSH cycle in postharvest pitaya that received PAA treatment during storage were investigated. This research aimed to determine the role of ROS metabolism in PAA-mediated maintenance of fruit quality in postharvest pitaya fruit, as well as to validate the effectiveness of PAA treatment as an eco-friendly, safe and promising preservation method for extending the shelf life of harvested pitaya fruits.

Materials and Treatments

Red flesh/red peel pitaya (Hylocereus polyrhizus cv. 'Guanhuahong') fruits were harvested at the mature stage (~35 d after flower anthesis) from a commercial orchard in Guangzhou, China, and transferred to the laboratory immediately. Fruit with uniform shape, colour and size and no physical injuries or disease symptoms were selected and divided randomly into two groups (210 fruits in each group) for the following treatments. The specific treatment procedures were as follows: (1) PAA treatment: fruits were sprayed with 1 mM PAA solution until the PAA covered the fruit surface uniformly. PAA at 1 mM was chosen as the optimum concentration according to a preliminary experiment (data not shown). (2) Fruits evenly sprayed with distilled water served as the control group. Thereafter, all treated fruits were air-dried, placed into plastic boxes and stored at 20 °C for 15 d at 85-90% relative humidity. Each treatment comprised three replicates, and samples of 10 pitaya fruits selected from each replicate were taken on Day 0 and at 3-day intervals for assessment of firmness and total soluble solids (TSS). Simultaneously, flesh was collected from the same samples and rapidly frozen at −80 °C for further analysis. For each parameter measurement, there were three replicates in each treatment at each time interval.
Determination of Fruit Physio-Chemical Quality

Every 10 fruits from each replicate were used for decay assessment. Decay incidence was measured based on the spoilage area with a 0-5 scale (0 = absence of decay; 1 = <10% decay area; 2 = 10-25%; 3 = 25-50%; 4 = 50-75%; and 5 = >75%), as described by Liu et al. [10]. The decay index was calculated by the equation:

Decay incidence (%) = [Σ(decay scale × number of fruit in each scale)/(5 × total number of fruit)] × 100 (1)

Ten pitaya fruits per replicate were weighed on Day 0 and at three-day intervals during the storage period. Weight loss was expressed as the percentage of weight lost relative to the initial weight. Fruit firmness was measured at three equatorial points of the peeled fruit using a GY-4 durometer with a cylindrical probe (12 mm diameter), and the result was expressed in N. TSS content was assessed by squeezing juice from the fruit used in the firmness test onto a digital refractometer (PAL-1, Atago, Japan) and was expressed as a percentage. Betalains were extracted by homogenizing 0.5 g of sample with 5 mL of 80% methanol (v/v) by sonication for 10 min, followed by centrifugation; the extraction was conducted twice. Betaxanthins and betacyanins were measured using a previously described method [26] by spectrophotometry at wavelengths of 478 nm and 538 nm, respectively. The content of both betalains was expressed as mg 100 g−1 of fresh weight (FW).

Measurement of the Generation Rate of Superoxide Anion Radicals (O2•−) and the Hydrogen Peroxide (H2O2) Concentration

The production rate of O2•− and the H2O2 content in pitaya pulp were determined using kits (Comin, Suzhou, China), following the manufacturer's instructions. NaNO2 was used as the standard for calculating the generation rate of O2•−, which was expressed as nmol g−1 min−1 FW. H2O2 content was calculated with a standard curve constructed with H2O2 and expressed as µmol g−1 FW.

Assessment of the Activity of Superoxide Dismutase (SOD), Peroxidase (POD), and Catalase (CAT)

The activity of SOD, POD, and CAT was determined using biochemical kits (Comin, Suzhou, China) following the manufacturer's guidelines. The activity of these enzymes was expressed as units (U) g−1 FW.

Determination of Components in the Ascorbic Acid-Glutathione (AsA-GSH) Cycle

The metabolites in the AsA-GSH cycle mainly include AsA, dehydroascorbate (DHA), GSH and oxidized glutathione (GSSG), and their contents were determined according to previously reported methods [27]. The contents of AsA and DHA were calculated using AsA as a standard and were expressed as nmol g−1 FW. GSH and GSSG contents were calculated based on standard curves of GSH and GSSG, respectively, and were expressed as µmol g−1 FW. The activity of ascorbate peroxidase (APX), glutathione reductase (GR), monodehydroascorbate reductase (MDHAR), and dehydroascorbate reductase (DHAR) was measured using reported methods [28]. The activity of all these enzymes was expressed as U g−1 FW.
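A compact restatement of the decay-index calculation in Eq. (1) above may clarify how the 0-5 scores are combined; the scale counts used in the example are illustrative, not observed data.

```python
# A minimal sketch of the decay index in Eq. (1); the counts are illustrative.

def decay_incidence(counts_by_scale: dict, max_scale: int = 5) -> float:
    """Decay incidence (%) = sum(scale * n at scale) / (max_scale * n total) * 100."""
    weighted = sum(scale * n for scale, n in counts_by_scale.items())
    total = sum(counts_by_scale.values())
    return weighted / (max_scale * total) * 100

# Ten fruits spread over the 0-5 scale:
print(decay_incidence({0: 2, 1: 1, 2: 2, 3: 2, 4: 2, 5: 1}))  # 48.0
```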
Determination of the Contents of Total Phenolics and Flavonoids and the Scavenging Rate of the DPPH Radical

Total phenolics and flavonoids contents were measured in accordance with the procedure described by Han et al. [29]. The total phenolics content was calculated using gallic acid as the standard, and the result was expressed as mg of gallic acid equivalents (GAE) per gram of fresh weight (mg g−1 FW). The total flavonoids content was expressed as mg of rutin equivalents per gram of fresh weight (mg g−1 FW). The scavenging rate of the DPPH radical was determined with a biochemical kit (Comin, Suzhou, China). The absorbance of the reaction system at 515 nm was determined, and the result was expressed as a percentage.

Gene Expression Analyses of Antioxidant Enzymes

Total RNA of pitaya fruit was extracted with an EASYspin Plus Plant RNA kit (Aidlab Biotech, Beijing, China), following the manufacturer's instructions. Hifair™ II 1st Strand cDNA Synthesis Super Mix for qPCR and Hieff® qPCR SYBR Green Master Mix (No Rox) (YEASEN Biotech, Shanghai, China) were employed to synthesize cDNA and to perform quantitative real-time PCR (qRT-PCR), respectively. HpActin1 was selected as the internal control [30]. Gene expression was expressed relative to the expression level of HpActin1. The primers used in this study are listed in Supplementary Table S1.

Statistical Analysis

All data presented are means ± standard error of three biological replicates and were subjected to analysis of variance (ANOVA) using SPSS software. Mean values were compared using Duncan's test at the significance level (p < 0.05 or p < 0.01).

Effects of PAA on Visual Appearance and Physio-Chemical Quality Properties of Pitaya Fruits during Storage

The visual appearance of pitaya fruits in both groups remained almost unchanged during the initial six days of storage (Figure 1), but shrinkage of the bracts and peel, and slight decay symptoms, were observed in control fruit on Day 9. Further observation showed that decay, bract degreening, and water loss were more evident in control fruits than in their PAA-treated counterparts after 12 d of storage. Comparatively, PAA application maintained better freshness and appearance. On Day 15, the control fruits had decayed extensively, while PAA treatment considerably delayed fruit decay. Decay of pitaya fruit was significantly inhibited by PAA treatment; the decay index was reduced from 70.66% in the control to 44.67% in PAA-treated fruit (Figure 2A). Moreover, as shown in Figure 2B, fresh weight decreased throughout storage irrespective of treatment; however, the weight loss in the control group was more pronounced than in the PAA treatment throughout the experiment. After 15 d of storage, the weight loss of PAA-treated fruit was 26.85% lower (p < 0.01) than that of control fruit. Figure 2C shows that the firmness of pitaya fruit decreased continuously over the entire storage period, and PAA treatment suppressed the loss of firmness. At the final storage time, fruits sprayed with PAA remained firmer (8.24 N) than the control group (7.22 N). Regardless of treatment, the TSS content of pitaya fruit decreased linearly with storage time. Compared with the initial value (19.07%), the TSS content in control pitaya fruits had decreased by 18.56% (p < 0.05) at the end of storage, while a higher TSS content was observed in PAA-treated fruits throughout storage (Figure 2D).
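The weight-loss and TSS figures quoted above are plain percentage changes against the Day-0 value; the following sketch reproduces the TSS arithmetic using the numbers from the text.

```python
# Percentage-change arithmetic behind the weight-loss and TSS figures above.

def percent_change(initial: float, final: float) -> float:
    """Signed percentage change relative to the initial value."""
    return (final - initial) / initial * 100

# Control TSS fell by 18.56% from an initial 19.07%, i.e. to about 15.53%:
final_tss = 19.07 * (1 - 0.1856)
print(round(final_tss, 2))                          # 15.53
print(round(percent_change(19.07, final_tss), 2))   # -18.56
```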
The content of betacyanins in postharvest pitaya fruits gradually increased until 9 days after treatment and then decreased over the rest of the storage time (Figure 2E). Although no statistically significant differences between the two groups were found during the first 9 d, PAA treatment maintained the betacyanins content. On Day 12, the content of betacyanins in pitaya fruits treated with PAA was significantly higher than that of the control group, at 1.14-fold that of the control (p < 0.05). A similar variation was observed for the betaxanthins content of PAA-treated and control fruit (Figure 2F). The betaxanthins content in PAA-treated fruit reached a maximum of 11.47 mg 100 g−1 FW on Day 9, which was 11.90% higher than that in control fruits.

Effects of PAA on the Generation Rate of O2•− and the H2O2 Content of Pitaya Fruits during Storage

The O2•− production rate in both treatments significantly increased from storage Day 1 to Day 9, after which the levels declined gradually until Day 15 (Figure 3A). However, the generation rate of O2•− in PAA-treated fruits was lower than that in control fruits throughout the storage. H2O2 content increased in control fruits with storage time (Figure 3B). The accumulation of H2O2 in control fruits increased from an initial value of 0.23 µmol g−1 FW to a maximum of 0.44 µmol g−1 FW after 15 d of storage. PAA treatment significantly inhibited H2O2 production; the concentration of H2O2 on Day 12 was 19.74% lower than that in control fruits (p < 0.01).

Effects of PAA on POD, SOD and CAT Enzymatic Activity and Gene Expression in Pitaya Fruits during Storage

The activity of SOD and POD exhibited a similar trend, rising considerably in early storage and dropping in the late storage period (Figure 4A,B). POD activity in PAA-treated fruits and SOD activity in both the control and PAA treatment all peaked on the third day, whereas POD activity in the control group peaked on the twelfth day. Moreover, PAA treatment improved the activity of SOD and POD, which were 28.00% and 28.53% higher (p < 0.01), respectively, than those of control pitaya fruits on Day 9. CAT activity in control fruits stayed at a stable low level during the whole storage. On Day 15, CAT activity in PAA-treated fruits was 1.16 times that in the control (Figure 4C). As depicted in Figure 4D-F, SOD, POD and CAT enzymatic activity and gene expression followed similar tendencies. The expression of HpSOD, HpPOD and HpCAT was obviously enhanced by PAA treatment, and a significant difference was found in the expression level of HpPOD throughout the storage.

Effects of PAA on Metabolite Contents in the AsA-GSH Cycle of Pitaya Fruits during Storage

As shown in Figure 5A,B, as storage time progressed, the contents of AsA and DHA in postharvest pitaya fruits peaked on Days 9 and 12, respectively, and then declined. The AsA content in fruits treated with PAA was significantly higher than that of control fruits; however, the DHA content in PAA-treated pitaya fruits was lower than that of the control during the entire storage period. Application of PAA improved the ratio of AsA/DHA in pitaya fruits (Figure 5C).
The ratio of AsA/DHA in PAA-treated pitaya fruits was 27.56% and 41.71% higher (p < 0.01) than that of control fruits on the third and fifteenth days, respectively. A gradual increase in GSH content was observed in both PAA-treated and control fruits (Figure 5D). Although GSH content did not differ significantly between the two groups from storage Day 3 to Day 12, a higher level of GSH was recorded in PAA-treated fruits throughout storage. GSSG contents in the PAA treatment and control groups followed a similar trend, increasing slightly in the early storage period and declining afterwards (Figure 5E). The GSSG content in PAA-treated pitaya was significantly lower than that in the control fruits on Days 9 and 15. Furthermore, the ratio of GSH/GSSG in pitaya was remarkably increased by PAA treatment compared with the control (Figure 5F).

Effects of PAA on the Activity and Gene Expression of AsA-GSH Pathway-Related Enzymes in Pitaya Fruits during Storage

As shown in Figure 6A, the APX activity of pitaya fruits increased within the first 6 d and fluctuated over the rest of storage, irrespective of treatment. PAA treatment resulted in significant increases in APX activity during most of the storage period. On Days 6 and 12, APX activity in PAA-treated fruits was 1.11 and 1.30 times that of the control (p < 0.01), respectively. GR activity in both PAA-treated and control fruits increased steadily, reached its maximum on Day 12, and then declined for the remainder of storage (Figure 6B), but the rate of decline in the PAA treatment during late storage was considerably less pronounced than in the control. DHAR activity, which was highest at 3 d of storage, tended to decline during storage; however, values significantly higher (p < 0.05) than the controls, by 13.32% and 17.86%, were found after 6 d and 15 d of storage, respectively (Figure 6C). Furthermore, MDHAR activity fluctuated to a greater extent in pitaya fruits during storage (Figure 6D). In comparison to the control, MDHAR activity in the PAA-treated group was higher during the whole storage period, and the difference was extremely significant at 6 d, 9 d, and 12 d, being 1.21-, 1.19- and 1.34-fold (p < 0.01) that of the control group, respectively.

Effects of PAA on the Contents of Total Phenolics and Total Flavonoids and the DPPH Radical-Scavenging Rate of Pitaya Fruits during Storage

The total phenolics and flavonoids contents in control fruits were lower than those in PAA-treated fruits over the storage period. Total phenolics in control samples declined from Day 9, whereas in PAA-treated pitaya fruits the decline began on Day 12. Compared with the untreated control, PAA-treated fruits showed 0.35- and 0.2-fold higher total phenolics and flavonoids, respectively, after 15 d of storage (Figure 7A,B). The DPPH radical scavenging rate in both PAA-treated and control fruits declined persistently during the experiment, except at 6 d (Figure 7C). However, this decrease was suppressed by PAA treatment. On Day 15, the DPPH free radical scavenging rate of fruits under PAA treatment was 3.46% higher than that of the control.

Discussion

Postharvest decay is a main limitation on the commercial value and storage life of pitaya fruit. With their antimicrobial and insecticidal activity, essential oils are accepted as a prospective option for controlling postharvest fruit quality and safety [31].
The findings of the current study demonstrate that PAA treatment efficiently reduced the decay incidence of pitaya fruits (Figure 2A), which is consistent with previous studies indicating that PAA could enhance resistance against disease development caused by green mold and blue mold of citrus fruits [32]. In addition, postharvest pitaya fruits undergo a loss of freshness characterized by a decline in bract greenness, increased weight loss, and decreased fruit firmness and soluble solids [33]. In the present study, visual changes in skin and bract colour obviously varied between the PAA treatment and control groups (Figure 1). Furthermore, the results here show that PAA treatment efficiently reduced the weight loss and delayed the declines in firmness and TSS (Figure 2B-D) in pitaya fruits during storage. The slightly lower weight loss observed in PAA-treated fruit than in control fruit is possibly because PAA functions as a coating agent on the surface of the pitaya fruit and impedes moisture loss. In fact, weight loss is reportedly interlinked with respiration rate; thus, it is worth exploring the effect of PAA on fruit respiration in the future. As wilting occurred, the firmness of the fruit decreased during storage, while application of PAA maintained higher firmness, which both inhibited the rate of fruit softening and made the fruit less prone to mechanical and microbial damage. Conversely, Lin et al. reported that fumigation with free PAA induced loss of firmness and of surface lightness and caused higher water loss [13]. A possible explanation for such opposite results is differences in species and/or concentrations. Moreover, as a big reservoir of bioactive phytochemicals, pitaya contains betacyanins with remarkable pharmacological value [34]. In this study, higher contents of betacyanins and betaxanthins were retained in PAA-treated fruit compared with control fruit (Figure 2E,F), which not only functioned as antioxidants but also contributed to the maintenance of visual appearance, as the red colour of pitaya fruit is attributed to betacyanins. Therefore, these results suggest that PAA might suppress tissue decay and maintain the nutritional and flavour qualities of pitaya fruits. According to the available reports, oxidative damage resulting from an imbalance between the antioxidant response and ROS generation affects fruit quality, fruit senescence and resistance to pathogens in most non-climacteric fruits [35]. SOD, POD, and CAT are the most studied antioxidant enzymes: SOD dismutates O2•− into H2O2, which is then decomposed by CAT and POD [27]. Enhancing the activity of antioxidant enzymes and their associated gene expression to modulate cellular redox homeostasis was previously shown to delay senescence and quality deterioration in various fruits. For example, Chen et al. indicated that the enhanced activity of SOD, CAT and APX under 1-methylcyclopropene (1-MCP) treatment contributes to eliminating O2•− and maintaining the quality of pears [36]. Melatonin-induced inhibition of fruit senescence has been shown to involve enhanced SOD, CAT, APX and POD activities [37]. Moreover, in pear, up-regulation of PcSOD and PcCAT, as well as enhanced activity of SOD and CAT, reduced H2O2 production, leading to delayed senescence [38]. In the current study, the ROS level in pitaya fruits increased as senescence progressed during storage (Figure 3).
PAA markedly improved the expression of HpSOD, HpPOD and HpCAT (Figure 4D-F), accompanied by increased activity of SOD, POD and CAT (Figure 4A-C), in harvested pitaya fruits during storage, which led to lower levels of O2•− and H2O2 in PAA-treated pitaya fruits (Figure 3A,B). These findings indicate that the effect of PAA on reducing ROS accumulation in pitaya fruits was correlated with enhanced ROS-scavenging ability at both the enzymatic and transcript levels, which, in turn, mitigated oxidative damage and the development of decay and senescence. Apart from the antioxidant enzymes, AsA and GSH have a direct capacity to quench ROS. In addition, GSH participates in the regeneration of AsA through the AsA-GSH cycle to remove excess ROS [39]. In the AsA-GSH cycle, APX uses AsA as a substrate to catalyze the reduction of H2O2 to H2O with concomitant production of MDHA; owing to its instability, MDHA can dismutate into DHA or be regenerated into AsA through MDHAR, and DHA is further reduced to AsA by DHAR using reducing equivalents from GSH [40]. GR, a relevant component of the AsA-GSH cycle, catalyzes the conversion of GSSG to GSH, allowing maintenance of the GSH/GSSG ratio [40]. The protective role of AsA and GSH, as well as of the AsA/DHA and GSH/GSSG ratios, in enhancing oxidative stress tolerance to delay senescence and maintain quality has been reported in several horticultural products [38,41,42]. Furthermore, given the importance of the AsA-GSH cycle in antioxidant activity and stress resistance, key enzymes and genes involved in this cycle have also been extensively studied. Recently, Zhang et al. reported that 1-MCP induced the expression of AdAPX, AdDHAR and AdGR but inhibited the expression of two isoforms of AdMDHAR, which was conducive to elevating the AsA content, scavenging H2O2, and postponing the senescence of kiwifruit [41]. A similar result was also found in bell pepper, where an enhanced AsA-GSH cycle reduced H2O2 and O2•− contents, overcoming physiological disorders during cold storage [43]. Comparably, in our present study, the transcription of HpAPX, HpGR, HpDHAR and HpMDHAR was boosted by PAA treatment (Figure 6E-H), which was consistent with the PAA-enhanced activity of APX, GR, DHAR, and MDHAR (Figure 6A-D). Additionally, these enzymes, together with the higher ratios of AsA/DHA and GSH/GSSG and the lower DHA and GSSG contents (Figure 5A-F), explain the observed lower production of ROS under PAA treatment (Figure 3). These results collectively indicate that the increased antioxidant capacity following PAA treatment, reflected in higher metabolite contents, enzyme activities and transcript abundance of genes involved in the AsA-GSH cycle, plays a vital role in ROS detoxification and redox state maintenance during postharvest storage of pitaya fruits. In addition, total phenolics and flavonoids, as non-enzymatic antioxidants, also fulfill a crucial role in protecting cells from oxidative damage. In plants, DPPH radical-scavenging capacity is generally used to evaluate the total non-enzymatic antioxidant capacity [44]. It has been reported that increased DPPH radical-scavenging ability, total phenolics and flavonoids are positively correlated with the reduction of ROS and the suppression of oxidative events in postharvest pitaya fruits treated with diphenyliodonium iodide [45], apple polyphenols [1], and methyl jasmonate [3].
In the present study, the decrease of the DPPH radical scavenging rate was delayed by PAA treatment (Figure 7C), accompanied by higher contents of total phenolics and flavonoids compared with the control (Figure 7A,B), which partially helped activate antioxidant responses and inhibit the overproduction of ROS.

Conclusions

Based on the above discussion, it is clear that postharvest application of PAA significantly dampened senescence and tissue decay and effectively maintained the overall quality of pitaya fruit. The enhanced postharvest disease resistance and quality preservation by PAA treatment might be associated with the reduction in ROS level and an increase in antioxidant capacity. The data suggest that this was attained through enhanced levels of total phenolics, flavonoids, and DPPH radical scavenging, and through increased gene expression and activity of SOD, POD and CAT as well as the enzymes of the AsA-GSH cycle. The present research may help further elucidate the mechanism underpinning PAA-mediated preservation of postharvest pitaya fruit quality.
2021-10-16T15:16:46.196Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "0efbab019b34dbeeee75720fc4f358c41e5f4725", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/10/10/2434/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "067c3137a8dc1c2e38858b06ab3eed3e5c9cac16", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
233797153
pes2o/s2orc
v3-fos-license
Aerobic oxidative CC bond cleavage of aromatic alkenes by a high valency iron-containing perovskite catalyst High-valency iron-containing perovskite catalyst BaFeO3−δ could efficiently promote the additive-free oxidative CC bond cleavage of various aromatic alkenes using O2 as the sole oxidant. Aerobic oxidative CC bond cleavage of aromatic alkenes by a high valency iron-containing perovskite catalyst † Satomi Shibata, Keigo Kamata * and Michikazu Hara * High valency iron-containing perovskite catalyst BaFeO 3−δ could efficiently promote the additive-free oxidative CC bond cleavage of various aromatic alkenes to the corresponding aldehydes or ketones using O 2 as the sole oxidant. This system was applicable to the gram-scale oxidation of 1,1-diphenylethylene, in which 2.71 g (75% yield) of the analytically pure ketone could be isolated. The oxidative CC bond cleavage of alkenes into the corresponding carbonyl compounds is an important reaction in both laboratories and chemical industry because aldehydes and ketones are useful synthetic intermediates for the production of perfumes, dyes, and pharmaceuticals. [1][2][3] Stoichiometric oxidants such as O 3 , m-chloroperbenzoic acid, KMnO 4 , CrO 2 Cl 2 , RuO 2 , and OsO 4 are typically utilized to accomplish efficient oxidative CC bond cleavage (Scheme 1(a)), 2,4 although these methods have disadvantages such as a requirement for specific equipment due to the instability of O 3 and the use of excess toxic and expensive reagents and/or solvents. To address these issues, research has been conducted on catalytic oxidative CC bond cleavage reactions based on second-or third-row transition metal salts and complexes (Ru, W, Os, In, Pd, Mo, Re, etc.) with NaIO 4 , NaClO, KHSO However, catalysis over multicomponent perovskites with cornersharing BO 6 octahedra has mainly been investigated for gasphase reactions (CO/CH 4 /NO oxidation), 10,11 and reports on liquid-phase organic reactions are limited. Therefore, we have focused on the liquid-phase catalysis of hexagonal perovskites with unique face-sharing octahedral units based on high valency metal species. During the course of our investigation on crystalline first-row metal oxide catalysts, 12 containing BaFeO 3−δ was found to act as an efficient heterogeneous catalyst for the aerobic oxidation of alkanes to the corresponding alcohols and ketones, in sharp contrast to Fe 3+ /Fe 2+ oxides. 17 Herein, we apply the superior oxidizing ability of a BaFeO 3−δ perovskite catalyst to aerobic oxidative CC bond cleavage. In the presence of BaFeO 3−δ various types of aromatic alkenes are converted to the corresponding carbonyl compounds using only O 2 , without the need for any additives. This study provides the first demonstration of an effective and reusable perovskite oxide catalyst for the oxidative CC bond cleavage of alkenes. ‡ Perovskite oxides including BaFeO 3−δ were synthesized by the sol-gel method using aspartic acid and/or malic acid and characterized by elemental analysis, powder X-ray diffraction (XRD), N 2 sorption, scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS) (Fig. S1, ESI †). 17 First, the oxidative cleavage of styrene (1a) in benzotrifluoride (PhCF 3 ) using O 2 (0.1 MPa) as the sole oxidant in the presence of various types of perovskite oxide and simple oxide catalysts was examined ( Fig. 1). Three main products, namely, benzaldehyde (2a), styrene oxide (3a), and benzoic acid (4a), were formed. 
The reaction did not proceed in the absence of a catalyst under the reaction conditions employed. Among the catalysts tested, BaFeO3−δ exhibited the highest catalytic activity and gave 2a with 68% selectivity at 34% total yield. Another high valency iron-containing perovskite, SrFeO3, also efficiently catalyzed the oxidation of 1a; however, the intrinsic activity per surface area of SrFeO3 (20 m2 g−1) was lower than that of BaFeO3−δ (11 m2 g−1). In addition, other Fe3+/Fe2+-containing perovskite and simple oxides such as CaFeO2.5, LaFeO3, Fe2O3, and Fe3O4 were much less effective for the present oxidation than BaFeO3−δ. These trends were also observed in the aerobic oxidation of adamantane with iron-containing oxides,17 which indicates the high intrinsic oxidation activity of high valency iron-containing perovskite oxides. Other Mn-, Co-, Ni-, Cu-, and Ru-containing oxides (SrMnO3, BaMnO3, activated MnO2, BaCoO3, LaCoO3, Co3O4, LaNiO3, NiO, CuO, and BaRuO3) were also inactive. In the presence of commercially available Fe3O4 nanoparticles and montmorillonite K10, which have been reported to be active for the oxidative cleavage of 1a to 2a,18,19 no formation of 2a was observed under the reaction conditions employed.

For the BaFeO3−δ-catalyzed oxidation of 1a, the O2 pressure had a strong effect on the selectivity to 2a and 3a, although the total yield remained unchanged (Fig. 2(a)). The selectivity to 2a increased from 68% to 87% with an increase in the O2 pressure from 0.1 MPa to 1.0 MPa (Fig. S2, ESI†), which indicates that the concentration of O2 is critical to the selective C=C bond cleavage of 1a to 2a. The BaFeO3−δ-catalyzed oxidation system could also be applied to the solvent-free oxidative cleavage of 1a, giving 2a in 29% yield (Fig. 2(a)). In this case, the reaction rate per surface area was 1.2 × 10−3 μmol h−1 m−2, much higher than those (2.0 × 10−4 to 4.8 × 10−6 μmol h−1 m−2) of previously reported catalysts (Table S1†). The total yield could also be increased to 71% by using tert-amyl alcohol (t-AmOH) as a solvent (Fig. 2(a)).

After the oxidation of 1a was completed under the conditions shown in Fig. 1, the used BaFeO3−δ catalyst could easily be recovered from the reaction mixture by simple filtration. No significant leaching of Fe and Ba species into the filtrate was confirmed by inductively coupled plasma atomic emission spectroscopy (ICP-AES) analysis (Fe 0.04% and Ba 0.2% with respect to the fresh BaFeO3−δ). In addition, the catalyst precursors (Fe(OAc)2, Ba(OAc)2, and a mixture of Fe(OAc)2 and Ba(OAc)2) were almost inactive for the oxidative C=C bond cleavage of 1a to 2a (Fig. 1), which suggests that there was no contribution to the observed catalysis from iron or barium species leached into the reaction solution. There was no significant difference in the XRD patterns and XPS spectra between the fresh and recovered catalysts, although the XRD peaks were slightly shifted, possibly due to the formation of oxygen-deficient BaFeO3−δ (Fig. S3, ESI†). The recovered BaFeO3−δ catalyst could then be reused without a significant change in the total yield or selectivity to each product: selectivity (2a/3a/4a = 68%/24%/8%) at 34% total yield (fresh) versus selectivity (2a/3a/4a = 65%/25%/10%) at 37% total yield (reused). Furthermore, the BaFeO3−δ-catalyzed system was applicable to oxidative C=C bond cleavage reactions of various types of alkenes with O2 (1.0 MPa) as the sole oxidant (Table 1).
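Because the two most active perovskites differ in BET surface area, the text compares them on a per-surface-area (intrinsic) basis. The sketch below shows that normalization; the BET areas (11 and 20 m2 g−1) come from the text, while the per-gram rates are illustrative placeholders chosen only to reproduce the qualitative ordering, not measured values:

```python
# Intrinsic (surface-area-normalized) activity: rate per gram / BET area.
# BET areas (m2 g-1) are from the text; the per-gram rates are ASSUMED
# placeholders for illustration, not data from the paper.

bet_area = {"BaFeO3-d": 11.0, "SrFeO3": 20.0}        # m2 g-1 (from text)
rate_per_gram = {"BaFeO3-d": 13.2, "SrFeO3": 16.0}   # umol h-1 g-1 (assumed)

intrinsic = {cat: r / bet_area[cat] for cat, r in rate_per_gram.items()}
for cat, v in sorted(intrinsic.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {v:.2f} umol h-1 m-2")
# BaFeO3-d (1.20) > SrFeO3 (0.80): the lower-area oxide is intrinsically
# the more active one, as the text concludes.
```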
Styrenes with electron-donating p-substituents (1b–1d) were converted into the corresponding aldehydes (2b–2d) as the main products, and the formation of the corresponding carboxylic acids (4b and 4d) was observed for the alkyl-substituted styrenes (entries 2–4). Oxidative cleavage of the para-halogenated 4-fluorostyrene (1e) and 4-chlorostyrene (1f) also proceeded to afford the corresponding aldehydes (2e and 2f) and carboxylic acids (4e and 4f) (entries 5 and 6). On the other hand, p-nitrostyrene (1g), bearing a strongly electron-withdrawing group, was also oxidized to the corresponding aldehyde, although a longer reaction time was required (entry 7). It has been reported for Pt@Fe2O3 and Pd(OAc)2 systems that substrates with electron-withdrawing substituents are less active in the oxidative cleavage of styrene derivatives than those with electron-donating substituents.20,21 Not only monosubstituted styrenes but also disubstituted styrenes were oxidized to the corresponding aldehydes and ketones. In the case of the 1,2-disubstituted β-methylstyrenes, the trans-isomer (1h) was more reactive than the cis-isomer (1i), and the yields of 2a were 30% and 4% from 1h and 1i, respectively (entries 8 and 9). Similar stereospecificity was observed for the more electron-rich but sterically hindered trans-stilbene (1j) and cis-stilbene (1k); however, the yields of 2a were low in comparison with those from 1h and 1i (entries 10 and 11). It has also been reported that trans-isomers are more active than cis-isomers in radical-mediated reactions.22 The 1,1-disubstituted α-methylstyrene (1l) and 1,1-diphenylethylene (1m) were efficiently converted to acetophenone (2l) and benzophenone (2m) in 75% and 70% yields, respectively (entries 12 and 13). In addition, the present system was applicable to the gram-scale reaction of 1m, and 2.71 g of analytically pure 2m could be isolated (eqn (1)). The present system was not effective for the oxidative cleavage of aliphatic alkenes (1-octene, 2-octene, and allylbenzene), and such a limitation of scope is similar to that of previously reported systems based on first-row transition metals (Table S1†).2

[eqn (1): gram-scale oxidative C=C bond cleavage of 1m to 2m over BaFeO3−δ with O2 (2.71 g of 2m, 75% yield)]

H2 temperature-programmed reduction (H2-TPR) analysis was conducted to compare the intrinsic oxidation ability of BaFeO3−δ with those of the other iron-based perovskite oxides (Fig. S4, ESI†). The H2 consumption per surface area below 573 K decreased in the order BaFeO3−δ (3.1 × 10−2 mmol m−2) > SrFeO3 (8.3 × 10−3 mmol m−2) > LaFeO3 (1.3 × 10−3 mmol m−2) > CaFeO2.5 (6.2 × 10−4 mmol m−2), which is consistent with the high reactivity of BaFeO3−δ.

The time course for the oxidative cleavage of 1a to 2a with 0.1 MPa of O2 catalyzed by BaFeO3−δ is shown in Fig. 2(b). The reaction proceeded with an induction period, and only 2a was observed at the initial stage of the reaction. The selectivity to 2a then gradually decreased with an increase in the selectivities to 3a and 4a. This induction period completely disappeared upon the addition of a radical initiator (tert-butyl hydroperoxide (TBHP); 0.3 equiv. relative to 1a, Fig. S5(a), ESI†), and the addition of a radical scavenger (2,2,6,6-tetramethylpiperidine 1-oxyl (TEMPO) or 2,6-di-tert-butyl-p-cresol (BHT); 1 equiv. relative to 1a) at the beginning or in the middle of the reaction completely suppressed the progress of the reaction (Fig. S6, ESI†). A similar effect of a radical initiator and scavenger has been observed for the KSF montmorillonite system, for which a radical-type mechanism has been proposed.19
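The H2-TPR comparison above amounts to ranking the perovskites by H2 consumption per surface area below 573 K as a proxy for intrinsic oxidizing ability. A short sketch of that ranking; the dictionary-based ordering is ours, but the four values are exactly those quoted in the text:

```python
# Rank iron-containing perovskites by H2 consumption per surface area
# below 573 K (mmol m-2), a proxy for intrinsic oxidizing ability.
# All values are taken from the H2-TPR discussion in the text.

h2_uptake = {
    "BaFeO3-d": 3.1e-2,
    "SrFeO3":   8.3e-3,
    "LaFeO3":   1.3e-3,
    "CaFeO2.5": 6.2e-4,
}

ranking = sorted(h2_uptake, key=h2_uptake.get, reverse=True)
print(" > ".join(ranking))
# BaFeO3-d > SrFeO3 > LaFeO3 > CaFeO2.5, matching the order of
# catalytic activity observed in the styrene oxidation screening.
```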
When BaFeO3−δ was removed by hot filtration after 16 h, the reaction did not stop and proceeded in a similar way to that without the filtration step (Fig. S3, ESI†). Such phenomena have also been reported for the aerobic oxidation of sulfides with MIL-101 catalysts via a radical-chain mechanism.23 The reaction did not proceed at all under an Ar atmosphere (Fig. S5(b), ESI†), which suggests that BaFeO3−δ does not act as a stoichiometric oxidant, but as a catalyst for the present oxidation. These results indicate that the BaFeO3−δ-catalyzed oxidation of 1a to 2a likely involves a radical mechanism in which BaFeO3−δ activates 1a to form an active radical species such as a benzyl radical, as has often been suggested for Fe and Mn catalysts (Fig. 3(a)).2 The selectivity to 3a decreased with an increase in the O2 pressure, and the selectivities to 3a and 4a increased with a decrease in the selectivity to 2a; therefore, 3a would be formed by the aerobic epoxidation of 1a with 2a as a co-reductant.24 At high O2 pressure, the radical species likely react with O2 to form peroxy intermediates, followed by rearrangement to 2a (Fig. 3(a), pathway A). On the other hand, at low O2 pressure the radical species would abstract a hydrogen atom from 2a, and the resulting acyl radical would react with O2 to form a peracid, which can promote the epoxidation of 1a to 3a with the co-production of 4a (Fig. 3(a), pathway B). 4a is also formed by the aerobic oxidation of 2a (Fig. 3(a), pathway C).

Density functional theory (DFT) calculations were performed to examine the possible reaction pathways for the formation of 2a, 3a, and 4a from 1a and O2 (Fig. 3(b)). The reaction of 1a with O2 to form an intermediate with a four-membered dioxetane moiety was calculated to be endothermic by 26 kJ mol−1; therefore, the pathway via this intermediate, proposed for Co, Cu, and Cr catalysts, would be thermodynamically unfavorable.2,25,26 On the other hand, not only the C=C bond cleavage oxidation of 1a with O2 to 2a and HCHO (exothermic by −303 kJ mol−1), but also the epoxidation of 1a to 3a with peroxybenzoic acid derived from 2a and O2 (exothermic by −271 kJ mol−1), was thermodynamically favorable, in good agreement with the proposed reaction pathways.

In conclusion, the high valency iron-based BaFeO3−δ perovskite oxide can act as a heterogeneous catalyst for the aerobic oxidative C=C bond cleavage of various aromatic alkenes to the corresponding carbonyl compounds with O2 as the sole oxidant.

This study was funded in part by JSPS KAKENHI Grant numbers JP20J11604 and JP18H01786, JST A-STEP (JPMJTR20TG), and the "Creation of Life Innovative Materials for Interdisciplinary and International Researcher Development" program of MEXT.

Author contributions

S. S. performed the experimental investigation and the data analysis with the help of K. K. S. S. and K. K. wrote the paper. The draft was reviewed by S. S., K. K., and M. H.

Conflicts of interest

There are no conflicts to declare.

Notes and references

‡ While the oxidative cleavage of styrene with O2 has been reported for some heterogeneous iron-based catalytic systems, there are only two examples of all-inorganic heterogeneous catalysts, namely KSF montmorillonite19 and hollow Fe3O4 nanoshells.18
2021-05-07T00:03:42.309Z
2021-04-14T00:00:00.000
{ "year": 2021, "sha1": "4c91a01b86e1d34af3b68c0471548e3d949ef9af", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/cy/d1cy00245g", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "7d6a9d680cd79a9174b58abc031a621ec9f34718", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }